00:00:00.001 Started by upstream project "autotest-per-patch" build number 121268 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "jbp-per-patch" build number 21687 00:00:00.001 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.092 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.093 The recommended git tool is: git 00:00:00.093 using credential 00000000-0000-0000-0000-000000000002 00:00:00.095 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.147 Fetching changes from the remote Git repository 00:00:00.149 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.203 Using shallow fetch with depth 1 00:00:00.203 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.203 > git --version # timeout=10 00:00:00.243 > git --version # 'git version 2.39.2' 00:00:00.243 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.244 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.244 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/39/22839/5 # timeout=5 00:00:05.625 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.636 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.648 Checking out Revision 415cb19f136d7a4a8ee08a5c51a72ee2989a84eb (FETCH_HEAD) 00:00:05.648 > git config core.sparsecheckout # timeout=10 00:00:05.660 > git read-tree -mu HEAD # timeout=10 00:00:05.677 > git checkout -f 415cb19f136d7a4a8ee08a5c51a72ee2989a84eb # timeout=5 00:00:05.698 Commit message: "jobs/autotest-upstream: Enable ASan, UBSan on all jobs" 00:00:05.698 > git rev-list --no-walk f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=10 00:00:05.812 [Pipeline] Start of Pipeline 00:00:05.830 [Pipeline] library 00:00:05.831 Loading library shm_lib@master 00:00:05.831 Library shm_lib@master is cached. Copying from home. 00:00:05.851 [Pipeline] node 00:00:05.867 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.869 [Pipeline] { 00:00:05.880 [Pipeline] catchError 00:00:05.881 [Pipeline] { 00:00:05.890 [Pipeline] wrap 00:00:05.898 [Pipeline] { 00:00:05.905 [Pipeline] stage 00:00:05.906 [Pipeline] { (Prologue) 00:00:06.067 [Pipeline] sh 00:00:06.353 + logger -p user.info -t JENKINS-CI 00:00:06.371 [Pipeline] echo 00:00:06.372 Node: WFP8 00:00:06.380 [Pipeline] sh 00:00:06.679 [Pipeline] setCustomBuildProperty 00:00:06.690 [Pipeline] echo 00:00:06.691 Cleanup processes 00:00:06.694 [Pipeline] sh 00:00:06.974 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.974 2146618 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.988 [Pipeline] sh 00:00:07.271 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.271 ++ grep -v 'sudo pgrep' 00:00:07.271 ++ awk '{print $1}' 00:00:07.271 + sudo kill -9 00:00:07.271 + true 00:00:07.285 [Pipeline] cleanWs 00:00:07.294 [WS-CLEANUP] Deleting project workspace... 00:00:07.294 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.300 [WS-CLEANUP] done 00:00:07.304 [Pipeline] setCustomBuildProperty 00:00:07.316 [Pipeline] sh 00:00:07.595 + sudo git config --global --replace-all safe.directory '*' 00:00:07.670 [Pipeline] nodesByLabel 00:00:07.671 Found a total of 1 nodes with the 'sorcerer' label 00:00:07.680 [Pipeline] httpRequest 00:00:07.684 HttpMethod: GET 00:00:07.685 URL: http://10.211.164.96/packages/jbp_415cb19f136d7a4a8ee08a5c51a72ee2989a84eb.tar.gz 00:00:07.687 Sending request to url: http://10.211.164.96/packages/jbp_415cb19f136d7a4a8ee08a5c51a72ee2989a84eb.tar.gz 00:00:07.690 Response Code: HTTP/1.1 200 OK 00:00:07.690 Success: Status code 200 is in the accepted range: 200,404 00:00:07.690 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_415cb19f136d7a4a8ee08a5c51a72ee2989a84eb.tar.gz 00:00:08.454 [Pipeline] sh 00:00:08.735 + tar --no-same-owner -xf jbp_415cb19f136d7a4a8ee08a5c51a72ee2989a84eb.tar.gz 00:00:08.756 [Pipeline] httpRequest 00:00:08.761 HttpMethod: GET 00:00:08.762 URL: http://10.211.164.96/packages/spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:00:08.763 Sending request to url: http://10.211.164.96/packages/spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:00:08.765 Response Code: HTTP/1.1 200 OK 00:00:08.765 Success: Status code 200 is in the accepted range: 200,404 00:00:08.766 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:00:25.152 [Pipeline] sh 00:00:25.435 + tar --no-same-owner -xf spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:00:27.983 [Pipeline] sh 00:00:28.264 + git -C spdk log --oneline -n5 00:00:28.264 8571999d8 test/scheduler: Stop moving all processes between cgroups 00:00:28.264 06472fb6d lib/idxd: fix batch size in kernel IDXD 00:00:28.264 44dcf4fb9 pkgdep/idxd: Add dependency for accel-config used in kernel IDXD 00:00:28.264 3dbaa93c1 nvmf: pass command dword 12 and 13 for write 00:00:28.264 19327fc3a bdev/nvme: use dtype/dspec for write commands 00:00:28.277 [Pipeline] } 00:00:28.294 [Pipeline] // stage 00:00:28.304 [Pipeline] stage 00:00:28.306 [Pipeline] { (Prepare) 00:00:28.324 [Pipeline] writeFile 00:00:28.340 [Pipeline] sh 00:00:28.618 + logger -p user.info -t JENKINS-CI 00:00:28.632 [Pipeline] sh 00:00:28.915 + logger -p user.info -t JENKINS-CI 00:00:28.929 [Pipeline] sh 00:00:29.262 + cat autorun-spdk.conf 00:00:29.262 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:29.262 SPDK_TEST_NVMF=1 00:00:29.262 SPDK_TEST_NVME_CLI=1 00:00:29.262 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:29.262 SPDK_TEST_NVMF_NICS=e810 00:00:29.262 SPDK_TEST_VFIOUSER=1 00:00:29.262 SPDK_RUN_ASAN=1 00:00:29.262 SPDK_RUN_UBSAN=1 00:00:29.262 NET_TYPE=phy 00:00:29.270 RUN_NIGHTLY=0 00:00:29.276 [Pipeline] readFile 00:00:29.303 [Pipeline] withEnv 00:00:29.306 [Pipeline] { 00:00:29.318 [Pipeline] sh 00:00:29.602 + set -ex 00:00:29.602 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:29.602 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:29.602 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:29.602 ++ SPDK_TEST_NVMF=1 00:00:29.602 ++ SPDK_TEST_NVME_CLI=1 00:00:29.602 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:29.602 ++ SPDK_TEST_NVMF_NICS=e810 00:00:29.602 ++ SPDK_TEST_VFIOUSER=1 00:00:29.602 ++ SPDK_RUN_ASAN=1 00:00:29.602 ++ SPDK_RUN_UBSAN=1 00:00:29.602 ++ NET_TYPE=phy 00:00:29.602 ++ RUN_NIGHTLY=0 00:00:29.602 + case $SPDK_TEST_NVMF_NICS in 00:00:29.602 + DRIVERS=ice 00:00:29.602 + [[ tcp == \r\d\m\a ]] 00:00:29.602 + [[ -n ice ]] 00:00:29.602 
+ sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:29.602 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:29.602 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:29.602 rmmod: ERROR: Module irdma is not currently loaded 00:00:29.602 rmmod: ERROR: Module i40iw is not currently loaded 00:00:29.602 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:29.602 + true 00:00:29.602 + for D in $DRIVERS 00:00:29.602 + sudo modprobe ice 00:00:29.602 + exit 0 00:00:29.611 [Pipeline] } 00:00:29.626 [Pipeline] // withEnv 00:00:29.630 [Pipeline] } 00:00:29.645 [Pipeline] // stage 00:00:29.652 [Pipeline] catchError 00:00:29.653 [Pipeline] { 00:00:29.662 [Pipeline] timeout 00:00:29.662 Timeout set to expire in 40 min 00:00:29.663 [Pipeline] { 00:00:29.673 [Pipeline] stage 00:00:29.675 [Pipeline] { (Tests) 00:00:29.687 [Pipeline] sh 00:00:29.965 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:29.965 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:29.965 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:29.965 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:29.965 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:29.965 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:29.965 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:29.965 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:29.965 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:29.965 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:29.965 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:29.965 + source /etc/os-release 00:00:29.965 ++ NAME='Fedora Linux' 00:00:29.965 ++ VERSION='38 (Cloud Edition)' 00:00:29.965 ++ ID=fedora 00:00:29.965 ++ VERSION_ID=38 00:00:29.965 ++ VERSION_CODENAME= 00:00:29.965 ++ PLATFORM_ID=platform:f38 00:00:29.965 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:29.965 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:29.965 ++ LOGO=fedora-logo-icon 00:00:29.965 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:29.965 ++ HOME_URL=https://fedoraproject.org/ 00:00:29.965 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:29.965 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:29.965 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:29.965 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:29.965 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:29.965 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:29.965 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:29.965 ++ SUPPORT_END=2024-05-14 00:00:29.965 ++ VARIANT='Cloud Edition' 00:00:29.965 ++ VARIANT_ID=cloud 00:00:29.965 + uname -a 00:00:29.965 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:29.965 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:32.500 Hugepages 00:00:32.500 node hugesize free / total 00:00:32.500 node0 1048576kB 0 / 0 00:00:32.500 node0 2048kB 0 / 0 00:00:32.500 node1 1048576kB 0 / 0 00:00:32.500 node1 2048kB 0 / 0 00:00:32.500 00:00:32.500 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:32.500 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:32.500 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:32.501 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:32.501 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:32.501 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:32.501 
I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:32.501 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:32.501 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:32.501 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:00:32.501 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:32.501 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:32.501 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:32.501 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:32.501 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:32.501 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:32.501 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:32.501 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:32.501 + rm -f /tmp/spdk-ld-path 00:00:32.501 + source autorun-spdk.conf 00:00:32.501 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.501 ++ SPDK_TEST_NVMF=1 00:00:32.501 ++ SPDK_TEST_NVME_CLI=1 00:00:32.501 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:32.501 ++ SPDK_TEST_NVMF_NICS=e810 00:00:32.501 ++ SPDK_TEST_VFIOUSER=1 00:00:32.501 ++ SPDK_RUN_ASAN=1 00:00:32.501 ++ SPDK_RUN_UBSAN=1 00:00:32.501 ++ NET_TYPE=phy 00:00:32.501 ++ RUN_NIGHTLY=0 00:00:32.501 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:32.501 + [[ -n '' ]] 00:00:32.501 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:32.501 + for M in /var/spdk/build-*-manifest.txt 00:00:32.501 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:32.501 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:32.501 + for M in /var/spdk/build-*-manifest.txt 00:00:32.501 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:32.501 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:32.501 ++ uname 00:00:32.501 + [[ Linux == \L\i\n\u\x ]] 00:00:32.501 + sudo dmesg -T 00:00:32.501 + sudo dmesg --clear 00:00:32.501 + dmesg_pid=2147527 00:00:32.501 + [[ Fedora Linux == FreeBSD ]] 00:00:32.501 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:32.501 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:32.501 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:32.501 + [[ -x /usr/src/fio-static/fio ]] 00:00:32.501 + export FIO_BIN=/usr/src/fio-static/fio 00:00:32.501 + FIO_BIN=/usr/src/fio-static/fio 00:00:32.501 + sudo dmesg -Tw 00:00:32.501 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:32.501 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:32.501 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:32.501 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:32.501 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:32.501 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:32.501 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:32.501 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:32.501 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:32.501 Test configuration: 00:00:32.501 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.501 SPDK_TEST_NVMF=1 00:00:32.501 SPDK_TEST_NVME_CLI=1 00:00:32.501 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:32.501 SPDK_TEST_NVMF_NICS=e810 00:00:32.501 SPDK_TEST_VFIOUSER=1 00:00:32.501 SPDK_RUN_ASAN=1 00:00:32.501 SPDK_RUN_UBSAN=1 00:00:32.501 NET_TYPE=phy 00:00:32.501 RUN_NIGHTLY=0 15:43:12 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:32.501 15:43:12 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:32.501 15:43:12 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:32.501 15:43:12 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:32.501 15:43:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:32.501 15:43:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:32.501 15:43:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:32.501 15:43:12 -- paths/export.sh@5 -- $ export PATH 00:00:32.501 15:43:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:32.501 15:43:12 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:32.501 15:43:12 -- common/autobuild_common.sh@435 -- $ date +%s 00:00:32.501 15:43:12 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714138992.XXXXXX 00:00:32.501 15:43:12 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714138992.0vmhrk 00:00:32.501 15:43:12 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:00:32.501 
15:43:12 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:00:32.501 15:43:12 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:32.501 15:43:12 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:32.501 15:43:12 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:32.501 15:43:12 -- common/autobuild_common.sh@451 -- $ get_config_params 00:00:32.501 15:43:12 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:00:32.501 15:43:12 -- common/autotest_common.sh@10 -- $ set +x 00:00:32.501 15:43:12 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user' 00:00:32.501 15:43:12 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:00:32.501 15:43:12 -- pm/common@17 -- $ local monitor 00:00:32.501 15:43:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:32.501 15:43:12 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2147561 00:00:32.501 15:43:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:32.501 15:43:12 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2147563 00:00:32.501 15:43:12 -- pm/common@21 -- $ date +%s 00:00:32.501 15:43:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:32.501 15:43:12 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2147565 00:00:32.501 15:43:12 -- pm/common@21 -- $ date +%s 00:00:32.501 15:43:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:32.501 15:43:12 -- pm/common@21 -- $ date +%s 00:00:32.501 15:43:12 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2147569 00:00:32.501 15:43:12 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714138992 00:00:32.501 15:43:12 -- pm/common@26 -- $ sleep 1 00:00:32.501 15:43:12 -- pm/common@21 -- $ date +%s 00:00:32.501 15:43:12 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714138992 00:00:32.501 15:43:12 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714138992 00:00:32.501 15:43:12 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714138992 00:00:32.760 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714138992_collect-cpu-load.pm.log 00:00:32.760 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714138992_collect-vmstat.pm.log 00:00:32.760 
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714138992_collect-bmc-pm.bmc.pm.log 00:00:32.760 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714138992_collect-cpu-temp.pm.log 00:00:33.696 15:43:13 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:00:33.696 15:43:13 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:33.696 15:43:13 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:33.696 15:43:13 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:33.696 15:43:13 -- spdk/autobuild.sh@16 -- $ date -u 00:00:33.696 Fri Apr 26 01:43:13 PM UTC 2024 00:00:33.696 15:43:13 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:33.696 v24.05-pre-449-g8571999d8 00:00:33.696 15:43:13 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:00:33.696 15:43:13 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:00:33.696 15:43:13 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:00:33.696 15:43:13 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:33.696 15:43:13 -- common/autotest_common.sh@10 -- $ set +x 00:00:33.696 ************************************ 00:00:33.696 START TEST asan 00:00:33.697 ************************************ 00:00:33.697 15:43:13 -- common/autotest_common.sh@1111 -- $ echo 'using asan' 00:00:33.697 using asan 00:00:33.697 00:00:33.697 real 0m0.000s 00:00:33.697 user 0m0.000s 00:00:33.697 sys 0m0.000s 00:00:33.697 15:43:13 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:00:33.697 15:43:13 -- common/autotest_common.sh@10 -- $ set +x 00:00:33.697 ************************************ 00:00:33.697 END TEST asan 00:00:33.697 ************************************ 00:00:33.697 15:43:13 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:33.697 15:43:13 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:33.697 15:43:13 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:00:33.697 15:43:13 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:33.697 15:43:13 -- common/autotest_common.sh@10 -- $ set +x 00:00:33.955 ************************************ 00:00:33.955 START TEST ubsan 00:00:33.955 ************************************ 00:00:33.955 15:43:13 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:00:33.955 using ubsan 00:00:33.955 00:00:33.955 real 0m0.000s 00:00:33.955 user 0m0.000s 00:00:33.955 sys 0m0.000s 00:00:33.955 15:43:13 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:00:33.955 15:43:13 -- common/autotest_common.sh@10 -- $ set +x 00:00:33.955 ************************************ 00:00:33.955 END TEST ubsan 00:00:33.955 ************************************ 00:00:33.955 15:43:13 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:33.955 15:43:13 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:33.955 15:43:13 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:33.955 15:43:13 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:33.955 15:43:13 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:33.955 15:43:13 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:33.955 15:43:13 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:33.955 15:43:13 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:33.955 15:43:13 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan 
--enable-asan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:34.213 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:34.213 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:34.472 Using 'verbs' RDMA provider 00:00:47.250 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:00:59.467 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:00:59.467 Creating mk/config.mk...done. 00:00:59.467 Creating mk/cc.flags.mk...done. 00:00:59.467 Type 'make' to build. 00:00:59.467 15:43:37 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:00:59.467 15:43:37 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:00:59.467 15:43:37 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:59.467 15:43:37 -- common/autotest_common.sh@10 -- $ set +x 00:00:59.467 ************************************ 00:00:59.467 START TEST make 00:00:59.467 ************************************ 00:00:59.467 15:43:37 -- common/autotest_common.sh@1111 -- $ make -j96 00:00:59.467 make[1]: Nothing to be done for 'all'. 00:00:59.726 The Meson build system 00:00:59.726 Version: 1.3.1 00:00:59.726 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:00:59.726 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:00:59.726 Build type: native build 00:00:59.726 Project name: libvfio-user 00:00:59.726 Project version: 0.0.1 00:00:59.726 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:00:59.726 C linker for the host machine: cc ld.bfd 2.39-16 00:00:59.726 Host machine cpu family: x86_64 00:00:59.726 Host machine cpu: x86_64 00:00:59.726 Run-time dependency threads found: YES 00:00:59.726 Library dl found: YES 00:00:59.726 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:00:59.726 Run-time dependency json-c found: YES 0.17 00:00:59.726 Run-time dependency cmocka found: YES 1.1.7 00:00:59.726 Program pytest-3 found: NO 00:00:59.726 Program flake8 found: NO 00:00:59.726 Program misspell-fixer found: NO 00:00:59.726 Program restructuredtext-lint found: NO 00:00:59.726 Program valgrind found: YES (/usr/bin/valgrind) 00:00:59.726 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:00:59.726 Compiler for C supports arguments -Wmissing-declarations: YES 00:00:59.726 Compiler for C supports arguments -Wwrite-strings: YES 00:00:59.726 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:00:59.726 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:00:59.726 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:00:59.726 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:00:59.726 Build targets in project: 8 00:00:59.726 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:00:59.726 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:00:59.726 00:00:59.726 libvfio-user 0.0.1 00:00:59.726 00:00:59.726 User defined options 00:00:59.726 buildtype : debug 00:00:59.726 default_library: shared 00:00:59.726 libdir : /usr/local/lib 00:00:59.726 00:00:59.726 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:00.294 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:00.294 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:00.294 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:00.294 [3/37] Compiling C object samples/null.p/null.c.o 00:01:00.294 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:00.294 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:00.294 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:00.294 [7/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:00.294 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:00.294 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:00.294 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:00.294 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:00.294 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:00.294 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:00.294 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:00.294 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:00.294 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:00.294 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:00.294 [18/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:00.294 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:00.294 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:00.294 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:00.294 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:00.294 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:00.294 [24/37] Compiling C object samples/server.p/server.c.o 00:01:00.294 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:00.294 [26/37] Compiling C object samples/client.p/client.c.o 00:01:00.294 [27/37] Linking target samples/client 00:01:00.294 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:00.552 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:00.552 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:00.552 [31/37] Linking target test/unit_tests 00:01:00.552 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:00.552 [33/37] Linking target samples/server 00:01:00.552 [34/37] Linking target samples/shadow_ioeventfd_server 00:01:00.552 [35/37] Linking target samples/null 00:01:00.552 [36/37] Linking target samples/lspci 00:01:00.552 [37/37] Linking target samples/gpio-pci-idio-16 00:01:00.552 INFO: autodetecting backend as ninja 00:01:00.552 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
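For reference, the libvfio-user build traced above reduces to a meson configure, a ninja build, and a staged install. A minimal sketch of those steps follows; the source, build, and DESTDIR paths are taken verbatim from this log, while the explicit meson setup flags are only inferred from the "User defined options" summary (buildtype, default_library, libdir) and the -j value is an assumption, not something the CI script necessarily passes:

# Hypothetical re-run of the libvfio-user steps shown in this log (sketch, not the CI script itself).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BUILD=$SPDK/build/libvfio-user/build-debug

meson setup "$BUILD" "$SPDK/libvfio-user" \
    --buildtype=debug --default-library=shared --libdir=/usr/local/lib   # assumed flags, matching the "User defined options" summary above
ninja -C "$BUILD" -j 96                                                  # compiles/links the 37 targets listed above
DESTDIR=$SPDK/build/libvfio-user meson install --quiet -C "$BUILD"       # staged install, as in the DESTDIR=... line that follows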
00:01:00.810 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:01.069 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:01.069 ninja: no work to do. 00:01:06.342 The Meson build system 00:01:06.342 Version: 1.3.1 00:01:06.342 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:06.342 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:06.342 Build type: native build 00:01:06.342 Program cat found: YES (/usr/bin/cat) 00:01:06.342 Project name: DPDK 00:01:06.342 Project version: 23.11.0 00:01:06.342 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:06.342 C linker for the host machine: cc ld.bfd 2.39-16 00:01:06.342 Host machine cpu family: x86_64 00:01:06.342 Host machine cpu: x86_64 00:01:06.342 Message: ## Building in Developer Mode ## 00:01:06.342 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:06.342 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:06.342 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:06.342 Program python3 found: YES (/usr/bin/python3) 00:01:06.342 Program cat found: YES (/usr/bin/cat) 00:01:06.342 Compiler for C supports arguments -march=native: YES 00:01:06.342 Checking for size of "void *" : 8 00:01:06.342 Checking for size of "void *" : 8 (cached) 00:01:06.342 Library m found: YES 00:01:06.342 Library numa found: YES 00:01:06.342 Has header "numaif.h" : YES 00:01:06.342 Library fdt found: NO 00:01:06.342 Library execinfo found: NO 00:01:06.342 Has header "execinfo.h" : YES 00:01:06.342 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:06.342 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:06.342 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:06.342 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:06.342 Run-time dependency openssl found: YES 3.0.9 00:01:06.342 Run-time dependency libpcap found: YES 1.10.4 00:01:06.342 Has header "pcap.h" with dependency libpcap: YES 00:01:06.342 Compiler for C supports arguments -Wcast-qual: YES 00:01:06.342 Compiler for C supports arguments -Wdeprecated: YES 00:01:06.342 Compiler for C supports arguments -Wformat: YES 00:01:06.342 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:06.342 Compiler for C supports arguments -Wformat-security: NO 00:01:06.342 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:06.343 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:06.343 Compiler for C supports arguments -Wnested-externs: YES 00:01:06.343 Compiler for C supports arguments -Wold-style-definition: YES 00:01:06.343 Compiler for C supports arguments -Wpointer-arith: YES 00:01:06.343 Compiler for C supports arguments -Wsign-compare: YES 00:01:06.343 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:06.343 Compiler for C supports arguments -Wundef: YES 00:01:06.343 Compiler for C supports arguments -Wwrite-strings: YES 00:01:06.343 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:06.343 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:06.343 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:06.343 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:06.343 Program objdump found: YES (/usr/bin/objdump) 00:01:06.343 Compiler for C supports arguments -mavx512f: YES 00:01:06.343 Checking if "AVX512 checking" compiles: YES 00:01:06.343 Fetching value of define "__SSE4_2__" : 1 00:01:06.343 Fetching value of define "__AES__" : 1 00:01:06.343 Fetching value of define "__AVX__" : 1 00:01:06.343 Fetching value of define "__AVX2__" : 1 00:01:06.343 Fetching value of define "__AVX512BW__" : 1 00:01:06.343 Fetching value of define "__AVX512CD__" : 1 00:01:06.343 Fetching value of define "__AVX512DQ__" : 1 00:01:06.343 Fetching value of define "__AVX512F__" : 1 00:01:06.343 Fetching value of define "__AVX512VL__" : 1 00:01:06.343 Fetching value of define "__PCLMUL__" : 1 00:01:06.343 Fetching value of define "__RDRND__" : 1 00:01:06.343 Fetching value of define "__RDSEED__" : 1 00:01:06.343 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:06.343 Fetching value of define "__znver1__" : (undefined) 00:01:06.343 Fetching value of define "__znver2__" : (undefined) 00:01:06.343 Fetching value of define "__znver3__" : (undefined) 00:01:06.343 Fetching value of define "__znver4__" : (undefined) 00:01:06.343 Library asan found: YES 00:01:06.343 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:06.343 Message: lib/log: Defining dependency "log" 00:01:06.343 Message: lib/kvargs: Defining dependency "kvargs" 00:01:06.343 Message: lib/telemetry: Defining dependency "telemetry" 00:01:06.343 Library rt found: YES 00:01:06.343 Checking for function "getentropy" : NO 00:01:06.343 Message: lib/eal: Defining dependency "eal" 00:01:06.343 Message: lib/ring: Defining dependency "ring" 00:01:06.343 Message: lib/rcu: Defining dependency "rcu" 00:01:06.343 Message: lib/mempool: Defining dependency "mempool" 00:01:06.343 Message: lib/mbuf: Defining dependency "mbuf" 00:01:06.343 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:06.343 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:06.343 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:06.343 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:06.343 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:06.343 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:06.343 Compiler for C supports arguments -mpclmul: YES 00:01:06.343 Compiler for C supports arguments -maes: YES 00:01:06.343 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:06.343 Compiler for C supports arguments -mavx512bw: YES 00:01:06.343 Compiler for C supports arguments -mavx512dq: YES 00:01:06.343 Compiler for C supports arguments -mavx512vl: YES 00:01:06.343 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:06.343 Compiler for C supports arguments -mavx2: YES 00:01:06.343 Compiler for C supports arguments -mavx: YES 00:01:06.343 Message: lib/net: Defining dependency "net" 00:01:06.343 Message: lib/meter: Defining dependency "meter" 00:01:06.343 Message: lib/ethdev: Defining dependency "ethdev" 00:01:06.343 Message: lib/pci: Defining dependency "pci" 00:01:06.343 Message: lib/cmdline: Defining dependency "cmdline" 00:01:06.343 Message: lib/hash: Defining dependency "hash" 00:01:06.343 Message: lib/timer: Defining dependency "timer" 00:01:06.343 Message: lib/compressdev: Defining dependency "compressdev" 00:01:06.343 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:06.343 Message: lib/dmadev: Defining dependency 
"dmadev" 00:01:06.343 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:06.343 Message: lib/power: Defining dependency "power" 00:01:06.343 Message: lib/reorder: Defining dependency "reorder" 00:01:06.343 Message: lib/security: Defining dependency "security" 00:01:06.343 Has header "linux/userfaultfd.h" : YES 00:01:06.343 Has header "linux/vduse.h" : YES 00:01:06.343 Message: lib/vhost: Defining dependency "vhost" 00:01:06.343 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:06.343 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:06.343 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:06.343 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:06.343 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:06.343 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:06.343 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:06.343 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:06.343 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:06.343 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:06.343 Program doxygen found: YES (/usr/bin/doxygen) 00:01:06.343 Configuring doxy-api-html.conf using configuration 00:01:06.343 Configuring doxy-api-man.conf using configuration 00:01:06.343 Program mandb found: YES (/usr/bin/mandb) 00:01:06.343 Program sphinx-build found: NO 00:01:06.343 Configuring rte_build_config.h using configuration 00:01:06.343 Message: 00:01:06.343 ================= 00:01:06.343 Applications Enabled 00:01:06.343 ================= 00:01:06.343 00:01:06.343 apps: 00:01:06.343 00:01:06.343 00:01:06.343 Message: 00:01:06.343 ================= 00:01:06.343 Libraries Enabled 00:01:06.343 ================= 00:01:06.343 00:01:06.343 libs: 00:01:06.343 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:06.343 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:06.343 cryptodev, dmadev, power, reorder, security, vhost, 00:01:06.343 00:01:06.343 Message: 00:01:06.343 =============== 00:01:06.343 Drivers Enabled 00:01:06.343 =============== 00:01:06.343 00:01:06.343 common: 00:01:06.343 00:01:06.343 bus: 00:01:06.343 pci, vdev, 00:01:06.343 mempool: 00:01:06.343 ring, 00:01:06.343 dma: 00:01:06.343 00:01:06.343 net: 00:01:06.343 00:01:06.343 crypto: 00:01:06.343 00:01:06.343 compress: 00:01:06.343 00:01:06.343 vdpa: 00:01:06.343 00:01:06.343 00:01:06.343 Message: 00:01:06.343 ================= 00:01:06.343 Content Skipped 00:01:06.343 ================= 00:01:06.343 00:01:06.343 apps: 00:01:06.343 dumpcap: explicitly disabled via build config 00:01:06.343 graph: explicitly disabled via build config 00:01:06.343 pdump: explicitly disabled via build config 00:01:06.343 proc-info: explicitly disabled via build config 00:01:06.343 test-acl: explicitly disabled via build config 00:01:06.343 test-bbdev: explicitly disabled via build config 00:01:06.343 test-cmdline: explicitly disabled via build config 00:01:06.343 test-compress-perf: explicitly disabled via build config 00:01:06.343 test-crypto-perf: explicitly disabled via build config 00:01:06.343 test-dma-perf: explicitly disabled via build config 00:01:06.343 test-eventdev: explicitly disabled via build config 00:01:06.343 test-fib: explicitly disabled via build config 00:01:06.343 test-flow-perf: explicitly disabled via build config 00:01:06.343 test-gpudev: explicitly 
disabled via build config 00:01:06.343 test-mldev: explicitly disabled via build config 00:01:06.343 test-pipeline: explicitly disabled via build config 00:01:06.343 test-pmd: explicitly disabled via build config 00:01:06.343 test-regex: explicitly disabled via build config 00:01:06.343 test-sad: explicitly disabled via build config 00:01:06.343 test-security-perf: explicitly disabled via build config 00:01:06.343 00:01:06.343 libs: 00:01:06.343 metrics: explicitly disabled via build config 00:01:06.343 acl: explicitly disabled via build config 00:01:06.343 bbdev: explicitly disabled via build config 00:01:06.343 bitratestats: explicitly disabled via build config 00:01:06.343 bpf: explicitly disabled via build config 00:01:06.343 cfgfile: explicitly disabled via build config 00:01:06.343 distributor: explicitly disabled via build config 00:01:06.343 efd: explicitly disabled via build config 00:01:06.343 eventdev: explicitly disabled via build config 00:01:06.343 dispatcher: explicitly disabled via build config 00:01:06.343 gpudev: explicitly disabled via build config 00:01:06.343 gro: explicitly disabled via build config 00:01:06.343 gso: explicitly disabled via build config 00:01:06.343 ip_frag: explicitly disabled via build config 00:01:06.343 jobstats: explicitly disabled via build config 00:01:06.343 latencystats: explicitly disabled via build config 00:01:06.343 lpm: explicitly disabled via build config 00:01:06.343 member: explicitly disabled via build config 00:01:06.343 pcapng: explicitly disabled via build config 00:01:06.343 rawdev: explicitly disabled via build config 00:01:06.343 regexdev: explicitly disabled via build config 00:01:06.343 mldev: explicitly disabled via build config 00:01:06.343 rib: explicitly disabled via build config 00:01:06.343 sched: explicitly disabled via build config 00:01:06.343 stack: explicitly disabled via build config 00:01:06.343 ipsec: explicitly disabled via build config 00:01:06.343 pdcp: explicitly disabled via build config 00:01:06.343 fib: explicitly disabled via build config 00:01:06.343 port: explicitly disabled via build config 00:01:06.343 pdump: explicitly disabled via build config 00:01:06.343 table: explicitly disabled via build config 00:01:06.343 pipeline: explicitly disabled via build config 00:01:06.343 graph: explicitly disabled via build config 00:01:06.343 node: explicitly disabled via build config 00:01:06.343 00:01:06.343 drivers: 00:01:06.343 common/cpt: not in enabled drivers build config 00:01:06.343 common/dpaax: not in enabled drivers build config 00:01:06.343 common/iavf: not in enabled drivers build config 00:01:06.343 common/idpf: not in enabled drivers build config 00:01:06.343 common/mvep: not in enabled drivers build config 00:01:06.343 common/octeontx: not in enabled drivers build config 00:01:06.343 bus/auxiliary: not in enabled drivers build config 00:01:06.344 bus/cdx: not in enabled drivers build config 00:01:06.344 bus/dpaa: not in enabled drivers build config 00:01:06.344 bus/fslmc: not in enabled drivers build config 00:01:06.344 bus/ifpga: not in enabled drivers build config 00:01:06.344 bus/platform: not in enabled drivers build config 00:01:06.344 bus/vmbus: not in enabled drivers build config 00:01:06.344 common/cnxk: not in enabled drivers build config 00:01:06.344 common/mlx5: not in enabled drivers build config 00:01:06.344 common/nfp: not in enabled drivers build config 00:01:06.344 common/qat: not in enabled drivers build config 00:01:06.344 common/sfc_efx: not in enabled drivers build config 
00:01:06.344 mempool/bucket: not in enabled drivers build config 00:01:06.344 mempool/cnxk: not in enabled drivers build config 00:01:06.344 mempool/dpaa: not in enabled drivers build config 00:01:06.344 mempool/dpaa2: not in enabled drivers build config 00:01:06.344 mempool/octeontx: not in enabled drivers build config 00:01:06.344 mempool/stack: not in enabled drivers build config 00:01:06.344 dma/cnxk: not in enabled drivers build config 00:01:06.344 dma/dpaa: not in enabled drivers build config 00:01:06.344 dma/dpaa2: not in enabled drivers build config 00:01:06.344 dma/hisilicon: not in enabled drivers build config 00:01:06.344 dma/idxd: not in enabled drivers build config 00:01:06.344 dma/ioat: not in enabled drivers build config 00:01:06.344 dma/skeleton: not in enabled drivers build config 00:01:06.344 net/af_packet: not in enabled drivers build config 00:01:06.344 net/af_xdp: not in enabled drivers build config 00:01:06.344 net/ark: not in enabled drivers build config 00:01:06.344 net/atlantic: not in enabled drivers build config 00:01:06.344 net/avp: not in enabled drivers build config 00:01:06.344 net/axgbe: not in enabled drivers build config 00:01:06.344 net/bnx2x: not in enabled drivers build config 00:01:06.344 net/bnxt: not in enabled drivers build config 00:01:06.344 net/bonding: not in enabled drivers build config 00:01:06.344 net/cnxk: not in enabled drivers build config 00:01:06.344 net/cpfl: not in enabled drivers build config 00:01:06.344 net/cxgbe: not in enabled drivers build config 00:01:06.344 net/dpaa: not in enabled drivers build config 00:01:06.344 net/dpaa2: not in enabled drivers build config 00:01:06.344 net/e1000: not in enabled drivers build config 00:01:06.344 net/ena: not in enabled drivers build config 00:01:06.344 net/enetc: not in enabled drivers build config 00:01:06.344 net/enetfec: not in enabled drivers build config 00:01:06.344 net/enic: not in enabled drivers build config 00:01:06.344 net/failsafe: not in enabled drivers build config 00:01:06.344 net/fm10k: not in enabled drivers build config 00:01:06.344 net/gve: not in enabled drivers build config 00:01:06.344 net/hinic: not in enabled drivers build config 00:01:06.344 net/hns3: not in enabled drivers build config 00:01:06.344 net/i40e: not in enabled drivers build config 00:01:06.344 net/iavf: not in enabled drivers build config 00:01:06.344 net/ice: not in enabled drivers build config 00:01:06.344 net/idpf: not in enabled drivers build config 00:01:06.344 net/igc: not in enabled drivers build config 00:01:06.344 net/ionic: not in enabled drivers build config 00:01:06.344 net/ipn3ke: not in enabled drivers build config 00:01:06.344 net/ixgbe: not in enabled drivers build config 00:01:06.344 net/mana: not in enabled drivers build config 00:01:06.344 net/memif: not in enabled drivers build config 00:01:06.344 net/mlx4: not in enabled drivers build config 00:01:06.344 net/mlx5: not in enabled drivers build config 00:01:06.344 net/mvneta: not in enabled drivers build config 00:01:06.344 net/mvpp2: not in enabled drivers build config 00:01:06.344 net/netvsc: not in enabled drivers build config 00:01:06.344 net/nfb: not in enabled drivers build config 00:01:06.344 net/nfp: not in enabled drivers build config 00:01:06.344 net/ngbe: not in enabled drivers build config 00:01:06.344 net/null: not in enabled drivers build config 00:01:06.344 net/octeontx: not in enabled drivers build config 00:01:06.344 net/octeon_ep: not in enabled drivers build config 00:01:06.344 net/pcap: not in enabled drivers 
build config 00:01:06.344 net/pfe: not in enabled drivers build config 00:01:06.344 net/qede: not in enabled drivers build config 00:01:06.344 net/ring: not in enabled drivers build config 00:01:06.344 net/sfc: not in enabled drivers build config 00:01:06.344 net/softnic: not in enabled drivers build config 00:01:06.344 net/tap: not in enabled drivers build config 00:01:06.344 net/thunderx: not in enabled drivers build config 00:01:06.344 net/txgbe: not in enabled drivers build config 00:01:06.344 net/vdev_netvsc: not in enabled drivers build config 00:01:06.344 net/vhost: not in enabled drivers build config 00:01:06.344 net/virtio: not in enabled drivers build config 00:01:06.344 net/vmxnet3: not in enabled drivers build config 00:01:06.344 raw/*: missing internal dependency, "rawdev" 00:01:06.344 crypto/armv8: not in enabled drivers build config 00:01:06.344 crypto/bcmfs: not in enabled drivers build config 00:01:06.344 crypto/caam_jr: not in enabled drivers build config 00:01:06.344 crypto/ccp: not in enabled drivers build config 00:01:06.344 crypto/cnxk: not in enabled drivers build config 00:01:06.344 crypto/dpaa_sec: not in enabled drivers build config 00:01:06.344 crypto/dpaa2_sec: not in enabled drivers build config 00:01:06.344 crypto/ipsec_mb: not in enabled drivers build config 00:01:06.344 crypto/mlx5: not in enabled drivers build config 00:01:06.344 crypto/mvsam: not in enabled drivers build config 00:01:06.344 crypto/nitrox: not in enabled drivers build config 00:01:06.344 crypto/null: not in enabled drivers build config 00:01:06.344 crypto/octeontx: not in enabled drivers build config 00:01:06.344 crypto/openssl: not in enabled drivers build config 00:01:06.344 crypto/scheduler: not in enabled drivers build config 00:01:06.344 crypto/uadk: not in enabled drivers build config 00:01:06.344 crypto/virtio: not in enabled drivers build config 00:01:06.344 compress/isal: not in enabled drivers build config 00:01:06.344 compress/mlx5: not in enabled drivers build config 00:01:06.344 compress/octeontx: not in enabled drivers build config 00:01:06.344 compress/zlib: not in enabled drivers build config 00:01:06.344 regex/*: missing internal dependency, "regexdev" 00:01:06.344 ml/*: missing internal dependency, "mldev" 00:01:06.344 vdpa/ifc: not in enabled drivers build config 00:01:06.344 vdpa/mlx5: not in enabled drivers build config 00:01:06.344 vdpa/nfp: not in enabled drivers build config 00:01:06.344 vdpa/sfc: not in enabled drivers build config 00:01:06.344 event/*: missing internal dependency, "eventdev" 00:01:06.344 baseband/*: missing internal dependency, "bbdev" 00:01:06.344 gpu/*: missing internal dependency, "gpudev" 00:01:06.344 00:01:06.344 00:01:06.344 Build targets in project: 85 00:01:06.344 00:01:06.344 DPDK 23.11.0 00:01:06.344 00:01:06.344 User defined options 00:01:06.344 buildtype : debug 00:01:06.344 default_library : shared 00:01:06.344 libdir : lib 00:01:06.344 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:06.344 b_sanitize : address 00:01:06.344 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:06.344 c_link_args : 00:01:06.344 cpu_instruction_set: native 00:01:06.344 disable_apps : test-acl,test-bbdev,test-crypto-perf,test-fib,test-pipeline,test-gpudev,test-flow-perf,pdump,dumpcap,test-sad,test-cmdline,test-eventdev,proc-info,test,test-dma-perf,test-pmd,test-mldev,test-compress-perf,test-security-perf,graph,test-regex 00:01:06.344 disable_libs : 
pipeline,member,eventdev,efd,bbdev,cfgfile,rib,sched,mldev,metrics,lpm,latencystats,pdump,pdcp,bpf,ipsec,fib,ip_frag,table,port,stack,gro,jobstats,regexdev,rawdev,pcapng,dispatcher,node,bitratestats,acl,gpudev,distributor,graph,gso 00:01:06.344 enable_docs : false 00:01:06.344 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:06.344 enable_kmods : false 00:01:06.344 tests : false 00:01:06.344 00:01:06.344 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:06.604 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:06.868 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:06.868 [2/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:06.868 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:06.868 [4/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:06.868 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:06.868 [6/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:06.868 [7/265] Linking static target lib/librte_kvargs.a 00:01:06.868 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:06.868 [9/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:06.868 [10/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:06.868 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:06.868 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:06.868 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:06.868 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:06.868 [15/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:06.868 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:06.868 [17/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:06.868 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:06.868 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:07.128 [20/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:07.128 [21/265] Linking static target lib/librte_log.a 00:01:07.128 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:07.128 [23/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:07.128 [24/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:07.128 [25/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:07.128 [26/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:07.128 [27/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:07.128 [28/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:07.128 [29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:07.128 [30/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:07.128 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:07.128 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:07.128 [33/265] Linking static target lib/librte_pci.a 00:01:07.128 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:07.128 [35/265] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:07.128 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:07.129 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:07.388 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:07.388 [39/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:07.388 [40/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:07.388 [41/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:07.388 [42/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:07.388 [43/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:07.388 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:07.388 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:07.388 [46/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:07.388 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:07.388 [48/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:07.388 [49/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.388 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:07.388 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:07.388 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:07.388 [53/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:07.388 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:07.388 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:07.388 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:07.388 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:07.388 [58/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:07.388 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:07.388 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:07.388 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:07.388 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:07.388 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:07.388 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:07.388 [65/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:07.388 [66/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:07.388 [67/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:07.388 [68/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:07.388 [69/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:07.388 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:07.388 [71/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:07.388 [72/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:07.388 [73/265] Linking static target lib/librte_telemetry.a 00:01:07.388 [74/265] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:07.388 [75/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:07.388 [76/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:07.388 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:07.388 [78/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:07.388 [79/265] Linking static target lib/librte_meter.a 00:01:07.388 [80/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:07.388 [81/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:07.388 [82/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:07.388 [83/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:07.388 [84/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:07.388 [85/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:07.388 [86/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.388 [87/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:07.388 [88/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:07.388 [89/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:07.388 [90/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:07.647 [91/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:07.647 [92/265] Linking static target lib/librte_ring.a 00:01:07.647 [93/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:07.647 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:07.647 [95/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:07.647 [96/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:07.647 [97/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:07.647 [98/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:07.647 [99/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:07.647 [100/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:07.647 [101/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:07.647 [102/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:07.647 [103/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:07.647 [104/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:07.647 [105/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:07.647 [106/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:07.647 [107/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:07.647 [108/265] Linking static target lib/librte_cmdline.a 00:01:07.647 [109/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:07.647 [110/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:07.647 [111/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:07.647 [112/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:07.647 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:07.647 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:07.647 [115/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:07.647 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:07.647 [117/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:07.647 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:07.647 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:07.647 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:07.647 [121/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:07.647 [122/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:07.647 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:07.647 [124/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:07.647 [125/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:07.647 [126/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.647 [127/265] Linking static target lib/librte_rcu.a 00:01:07.647 [128/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:07.647 [129/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:07.647 [130/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:07.647 [131/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.647 [132/265] Linking target lib/librte_log.so.24.0 00:01:07.647 [133/265] Linking static target lib/librte_eal.a 00:01:07.647 [134/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:07.647 [135/265] Linking static target lib/librte_timer.a 00:01:07.647 [136/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:07.906 [137/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:07.906 [138/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.906 [139/265] Linking static target lib/librte_mempool.a 00:01:07.906 [140/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:07.906 [141/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:07.906 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:07.906 [143/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:07.906 [144/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:07.906 [145/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:07.906 [146/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:07.906 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:07.906 [148/265] Linking static target lib/librte_net.a 00:01:07.906 [149/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:07.906 [150/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:07.906 [151/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:07.906 [152/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:07.906 [153/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:07.906 [154/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:07.906 [155/265] Linking static target lib/librte_dmadev.a 00:01:07.906 [156/265] 
Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:07.906 [157/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:07.906 [158/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:07.906 [159/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.906 [160/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:07.906 [161/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:07.906 [162/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:07.906 [163/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:07.906 [164/265] Linking target lib/librte_kvargs.so.24.0 00:01:07.906 [165/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:07.906 [166/265] Linking target lib/librte_telemetry.so.24.0 00:01:07.906 [167/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:07.906 [168/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:07.906 [169/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:07.906 [170/265] Linking static target lib/librte_power.a 00:01:07.906 [171/265] Linking static target lib/librte_compressdev.a 00:01:07.906 [172/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.906 [173/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:07.906 [174/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:07.906 [175/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:08.165 [176/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:08.165 [177/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:08.165 [178/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:08.165 [179/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:08.165 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:08.165 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:08.165 [182/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:08.165 [183/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:08.165 [184/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:08.165 [185/265] Linking static target drivers/librte_bus_vdev.a 00:01:08.165 [186/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:08.165 [187/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.165 [188/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.165 [189/265] Linking static target lib/librte_mbuf.a 00:01:08.165 [190/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:08.165 [191/265] Linking static target lib/librte_reorder.a 00:01:08.165 [192/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:08.165 [193/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:08.165 [194/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:08.165 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:08.165 [196/265] Compiling 
C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:08.165 [197/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:08.165 [198/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:08.165 [199/265] Linking static target drivers/librte_bus_pci.a 00:01:08.424 [200/265] Linking static target lib/librte_security.a 00:01:08.424 [201/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.424 [202/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:08.424 [203/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:08.424 [204/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:08.424 [205/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:08.424 [206/265] Linking static target drivers/librte_mempool_ring.a 00:01:08.424 [207/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.424 [208/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:08.424 [209/265] Linking static target lib/librte_hash.a 00:01:08.683 [210/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.683 [211/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.683 [212/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.683 [213/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.683 [214/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:08.683 [215/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:08.683 [216/265] Linking static target lib/librte_cryptodev.a 00:01:08.683 [217/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.942 [218/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.942 [219/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.942 [220/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.200 [221/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.200 [222/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:09.200 [223/265] Linking static target lib/librte_ethdev.a 00:01:10.137 [224/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:10.396 [225/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.680 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:13.680 [227/265] Linking static target lib/librte_vhost.a 00:01:15.052 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.427 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.365 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.365 [231/265] Linking target lib/librte_eal.so.24.0 00:01:17.624 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 
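(Aside: the DPDK objects compiled above come from the meson configuration summarized at the top of this section, with enable_drivers limited to bus, bus/pci, bus/vdev and mempool/ring, and docs, kmods and tests disabled. A minimal sketch of a meson/ninja invocation that would reproduce such a configuration is shown here; it assumes a DPDK source checkout as the working directory and omits whatever additional flags SPDK's own configure wrapper passes, since those are not visible in this log.)

# Sketch only: configure a stripped-down DPDK build matching the summary above,
# then compile it with ninja in the build-tmp directory the log enters.
meson setup build-tmp \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dtests=false
ninja -C build-tmp    # the CI job runs the equivalent with -j 96
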
00:01:17.624 [233/265] Linking target lib/librte_meter.so.24.0 00:01:17.624 [234/265] Linking target lib/librte_ring.so.24.0 00:01:17.624 [235/265] Linking target lib/librte_timer.so.24.0 00:01:17.624 [236/265] Linking target lib/librte_pci.so.24.0 00:01:17.624 [237/265] Linking target lib/librte_dmadev.so.24.0 00:01:17.624 [238/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:17.624 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:17.624 [240/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:17.624 [241/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:17.624 [242/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:17.624 [243/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:17.625 [244/265] Linking target lib/librte_mempool.so.24.0 00:01:17.883 [245/265] Linking target lib/librte_rcu.so.24.0 00:01:17.883 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:17.883 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:17.883 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:17.883 [249/265] Linking target lib/librte_mbuf.so.24.0 00:01:17.883 [250/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:18.140 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:18.140 [252/265] Linking target lib/librte_net.so.24.0 00:01:18.140 [253/265] Linking target lib/librte_compressdev.so.24.0 00:01:18.140 [254/265] Linking target lib/librte_cryptodev.so.24.0 00:01:18.140 [255/265] Linking target lib/librte_reorder.so.24.0 00:01:18.140 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:18.140 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:18.140 [258/265] Linking target lib/librte_cmdline.so.24.0 00:01:18.140 [259/265] Linking target lib/librte_hash.so.24.0 00:01:18.399 [260/265] Linking target lib/librte_ethdev.so.24.0 00:01:18.399 [261/265] Linking target lib/librte_security.so.24.0 00:01:18.399 [262/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:18.399 [263/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:18.399 [264/265] Linking target lib/librte_power.so.24.0 00:01:18.399 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:18.399 INFO: autodetecting backend as ninja 00:01:18.399 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:19.334 CC lib/ut/ut.o 00:01:19.334 CC lib/ut_mock/mock.o 00:01:19.334 CC lib/log/log.o 00:01:19.334 CC lib/log/log_flags.o 00:01:19.334 CC lib/log/log_deprecated.o 00:01:19.592 LIB libspdk_ut.a 00:01:19.592 LIB libspdk_ut_mock.a 00:01:19.592 SO libspdk_ut.so.2.0 00:01:19.592 SO libspdk_ut_mock.so.6.0 00:01:19.592 LIB libspdk_log.a 00:01:19.592 SYMLINK libspdk_ut.so 00:01:19.592 SO libspdk_log.so.7.0 00:01:19.592 SYMLINK libspdk_ut_mock.so 00:01:19.592 SYMLINK libspdk_log.so 00:01:19.850 CXX lib/trace_parser/trace.o 00:01:20.109 CC lib/util/base64.o 00:01:20.109 CC lib/util/bit_array.o 00:01:20.109 CC lib/util/cpuset.o 00:01:20.109 CC lib/util/crc16.o 00:01:20.109 CC lib/util/crc32.o 00:01:20.109 CC lib/util/crc64.o 00:01:20.109 CC lib/util/crc32c.o 
00:01:20.109 CC lib/dma/dma.o 00:01:20.109 CC lib/util/crc32_ieee.o 00:01:20.109 CC lib/util/dif.o 00:01:20.109 CC lib/util/fd.o 00:01:20.109 CC lib/util/hexlify.o 00:01:20.109 CC lib/ioat/ioat.o 00:01:20.109 CC lib/util/file.o 00:01:20.109 CC lib/util/iov.o 00:01:20.109 CC lib/util/math.o 00:01:20.109 CC lib/util/pipe.o 00:01:20.109 CC lib/util/strerror_tls.o 00:01:20.109 CC lib/util/string.o 00:01:20.109 CC lib/util/uuid.o 00:01:20.109 CC lib/util/fd_group.o 00:01:20.109 CC lib/util/xor.o 00:01:20.109 CC lib/util/zipf.o 00:01:20.109 CC lib/vfio_user/host/vfio_user.o 00:01:20.109 CC lib/vfio_user/host/vfio_user_pci.o 00:01:20.109 LIB libspdk_dma.a 00:01:20.367 SO libspdk_dma.so.4.0 00:01:20.367 LIB libspdk_ioat.a 00:01:20.367 SYMLINK libspdk_dma.so 00:01:20.367 SO libspdk_ioat.so.7.0 00:01:20.367 SYMLINK libspdk_ioat.so 00:01:20.367 LIB libspdk_vfio_user.a 00:01:20.367 SO libspdk_vfio_user.so.5.0 00:01:20.625 SYMLINK libspdk_vfio_user.so 00:01:20.625 LIB libspdk_util.a 00:01:20.625 SO libspdk_util.so.9.0 00:01:20.625 LIB libspdk_trace_parser.a 00:01:20.625 SYMLINK libspdk_util.so 00:01:20.625 SO libspdk_trace_parser.so.5.0 00:01:20.882 SYMLINK libspdk_trace_parser.so 00:01:20.882 CC lib/vmd/led.o 00:01:20.882 CC lib/vmd/vmd.o 00:01:21.140 CC lib/env_dpdk/env.o 00:01:21.140 CC lib/env_dpdk/memory.o 00:01:21.140 CC lib/env_dpdk/pci.o 00:01:21.140 CC lib/env_dpdk/init.o 00:01:21.140 CC lib/env_dpdk/threads.o 00:01:21.140 CC lib/json/json_parse.o 00:01:21.140 CC lib/json/json_util.o 00:01:21.140 CC lib/env_dpdk/pci_ioat.o 00:01:21.140 CC lib/env_dpdk/pci_virtio.o 00:01:21.140 CC lib/json/json_write.o 00:01:21.140 CC lib/env_dpdk/pci_vmd.o 00:01:21.140 CC lib/rdma/common.o 00:01:21.140 CC lib/env_dpdk/pci_idxd.o 00:01:21.140 CC lib/rdma/rdma_verbs.o 00:01:21.140 CC lib/env_dpdk/pci_event.o 00:01:21.140 CC lib/env_dpdk/sigbus_handler.o 00:01:21.140 CC lib/env_dpdk/pci_dpdk.o 00:01:21.140 CC lib/idxd/idxd.o 00:01:21.140 CC lib/idxd/idxd_user.o 00:01:21.140 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:21.140 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:21.140 CC lib/conf/conf.o 00:01:21.398 LIB libspdk_conf.a 00:01:21.398 LIB libspdk_json.a 00:01:21.398 SO libspdk_conf.so.6.0 00:01:21.398 LIB libspdk_rdma.a 00:01:21.398 SO libspdk_rdma.so.6.0 00:01:21.398 SO libspdk_json.so.6.0 00:01:21.398 SYMLINK libspdk_conf.so 00:01:21.398 SYMLINK libspdk_json.so 00:01:21.398 SYMLINK libspdk_rdma.so 00:01:21.657 LIB libspdk_idxd.a 00:01:21.657 LIB libspdk_vmd.a 00:01:21.657 SO libspdk_idxd.so.12.0 00:01:21.657 SO libspdk_vmd.so.6.0 00:01:21.657 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:21.657 CC lib/jsonrpc/jsonrpc_server.o 00:01:21.657 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:21.657 CC lib/jsonrpc/jsonrpc_client.o 00:01:21.657 SYMLINK libspdk_vmd.so 00:01:21.657 SYMLINK libspdk_idxd.so 00:01:21.914 LIB libspdk_jsonrpc.a 00:01:21.914 SO libspdk_jsonrpc.so.6.0 00:01:21.914 SYMLINK libspdk_jsonrpc.so 00:01:22.480 CC lib/rpc/rpc.o 00:01:22.480 LIB libspdk_env_dpdk.a 00:01:22.480 SO libspdk_env_dpdk.so.14.0 00:01:22.480 LIB libspdk_rpc.a 00:01:22.480 SYMLINK libspdk_env_dpdk.so 00:01:22.480 SO libspdk_rpc.so.6.0 00:01:22.480 SYMLINK libspdk_rpc.so 00:01:22.737 CC lib/trace/trace.o 00:01:22.737 CC lib/trace/trace_flags.o 00:01:22.737 CC lib/trace/trace_rpc.o 00:01:22.996 CC lib/keyring/keyring.o 00:01:22.996 CC lib/keyring/keyring_rpc.o 00:01:22.996 CC lib/notify/notify.o 00:01:22.996 CC lib/notify/notify_rpc.o 00:01:22.996 LIB libspdk_notify.a 00:01:22.996 LIB libspdk_trace.a 00:01:22.996 SO libspdk_notify.so.6.0 00:01:22.996 
LIB libspdk_keyring.a 00:01:22.996 SO libspdk_trace.so.10.0 00:01:23.254 SO libspdk_keyring.so.1.0 00:01:23.254 SYMLINK libspdk_notify.so 00:01:23.254 SYMLINK libspdk_trace.so 00:01:23.254 SYMLINK libspdk_keyring.so 00:01:23.512 CC lib/thread/thread.o 00:01:23.512 CC lib/thread/iobuf.o 00:01:23.512 CC lib/sock/sock.o 00:01:23.512 CC lib/sock/sock_rpc.o 00:01:23.772 LIB libspdk_sock.a 00:01:23.772 SO libspdk_sock.so.9.0 00:01:24.029 SYMLINK libspdk_sock.so 00:01:24.299 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:24.299 CC lib/nvme/nvme_ctrlr.o 00:01:24.299 CC lib/nvme/nvme_ns_cmd.o 00:01:24.299 CC lib/nvme/nvme_ns.o 00:01:24.299 CC lib/nvme/nvme_fabric.o 00:01:24.299 CC lib/nvme/nvme_pcie_common.o 00:01:24.299 CC lib/nvme/nvme.o 00:01:24.299 CC lib/nvme/nvme_pcie.o 00:01:24.299 CC lib/nvme/nvme_qpair.o 00:01:24.299 CC lib/nvme/nvme_transport.o 00:01:24.299 CC lib/nvme/nvme_quirks.o 00:01:24.299 CC lib/nvme/nvme_discovery.o 00:01:24.299 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:24.299 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:24.299 CC lib/nvme/nvme_tcp.o 00:01:24.299 CC lib/nvme/nvme_opal.o 00:01:24.299 CC lib/nvme/nvme_io_msg.o 00:01:24.299 CC lib/nvme/nvme_poll_group.o 00:01:24.299 CC lib/nvme/nvme_zns.o 00:01:24.299 CC lib/nvme/nvme_stubs.o 00:01:24.299 CC lib/nvme/nvme_auth.o 00:01:24.299 CC lib/nvme/nvme_cuse.o 00:01:24.299 CC lib/nvme/nvme_vfio_user.o 00:01:24.299 CC lib/nvme/nvme_rdma.o 00:01:24.947 LIB libspdk_thread.a 00:01:24.947 SO libspdk_thread.so.10.0 00:01:24.947 SYMLINK libspdk_thread.so 00:01:25.205 CC lib/virtio/virtio.o 00:01:25.205 CC lib/virtio/virtio_vhost_user.o 00:01:25.205 CC lib/virtio/virtio_vfio_user.o 00:01:25.205 CC lib/virtio/virtio_pci.o 00:01:25.205 CC lib/vfu_tgt/tgt_endpoint.o 00:01:25.205 CC lib/vfu_tgt/tgt_rpc.o 00:01:25.205 CC lib/blob/blobstore.o 00:01:25.205 CC lib/blob/request.o 00:01:25.205 CC lib/blob/blob_bs_dev.o 00:01:25.205 CC lib/blob/zeroes.o 00:01:25.205 CC lib/init/json_config.o 00:01:25.205 CC lib/init/subsystem.o 00:01:25.205 CC lib/accel/accel.o 00:01:25.205 CC lib/init/subsystem_rpc.o 00:01:25.205 CC lib/accel/accel_rpc.o 00:01:25.205 CC lib/init/rpc.o 00:01:25.205 CC lib/accel/accel_sw.o 00:01:25.464 LIB libspdk_init.a 00:01:25.464 SO libspdk_init.so.5.0 00:01:25.464 LIB libspdk_vfu_tgt.a 00:01:25.464 LIB libspdk_virtio.a 00:01:25.464 SYMLINK libspdk_init.so 00:01:25.464 SO libspdk_vfu_tgt.so.3.0 00:01:25.464 SO libspdk_virtio.so.7.0 00:01:25.722 SYMLINK libspdk_vfu_tgt.so 00:01:25.722 SYMLINK libspdk_virtio.so 00:01:25.722 CC lib/event/app.o 00:01:25.722 CC lib/event/reactor.o 00:01:25.722 CC lib/event/log_rpc.o 00:01:25.722 CC lib/event/app_rpc.o 00:01:25.722 CC lib/event/scheduler_static.o 00:01:26.290 LIB libspdk_nvme.a 00:01:26.290 LIB libspdk_accel.a 00:01:26.290 LIB libspdk_event.a 00:01:26.290 SO libspdk_accel.so.15.0 00:01:26.290 SO libspdk_event.so.13.0 00:01:26.290 SO libspdk_nvme.so.13.0 00:01:26.290 SYMLINK libspdk_accel.so 00:01:26.290 SYMLINK libspdk_event.so 00:01:26.548 SYMLINK libspdk_nvme.so 00:01:26.548 CC lib/bdev/bdev.o 00:01:26.548 CC lib/bdev/bdev_rpc.o 00:01:26.548 CC lib/bdev/scsi_nvme.o 00:01:26.548 CC lib/bdev/bdev_zone.o 00:01:26.548 CC lib/bdev/part.o 00:01:27.924 LIB libspdk_blob.a 00:01:27.924 SO libspdk_blob.so.11.0 00:01:28.182 SYMLINK libspdk_blob.so 00:01:28.440 CC lib/lvol/lvol.o 00:01:28.440 CC lib/blobfs/blobfs.o 00:01:28.440 CC lib/blobfs/tree.o 00:01:29.005 LIB libspdk_bdev.a 00:01:29.005 SO libspdk_bdev.so.15.0 00:01:29.005 SYMLINK libspdk_bdev.so 00:01:29.005 LIB libspdk_blobfs.a 00:01:29.264 SO 
libspdk_blobfs.so.10.0 00:01:29.264 LIB libspdk_lvol.a 00:01:29.264 SO libspdk_lvol.so.10.0 00:01:29.264 SYMLINK libspdk_blobfs.so 00:01:29.264 SYMLINK libspdk_lvol.so 00:01:29.264 CC lib/ublk/ublk_rpc.o 00:01:29.264 CC lib/nvmf/ctrlr.o 00:01:29.264 CC lib/ublk/ublk.o 00:01:29.264 CC lib/nvmf/subsystem.o 00:01:29.264 CC lib/nvmf/ctrlr_discovery.o 00:01:29.264 CC lib/nvmf/ctrlr_bdev.o 00:01:29.264 CC lib/nvmf/nvmf.o 00:01:29.264 CC lib/nvmf/nvmf_rpc.o 00:01:29.264 CC lib/nvmf/transport.o 00:01:29.264 CC lib/nvmf/rdma.o 00:01:29.264 CC lib/nvmf/tcp.o 00:01:29.264 CC lib/nvmf/vfio_user.o 00:01:29.264 CC lib/scsi/dev.o 00:01:29.264 CC lib/scsi/lun.o 00:01:29.264 CC lib/ftl/ftl_core.o 00:01:29.264 CC lib/scsi/port.o 00:01:29.264 CC lib/ftl/ftl_init.o 00:01:29.264 CC lib/scsi/scsi.o 00:01:29.264 CC lib/ftl/ftl_layout.o 00:01:29.264 CC lib/ftl/ftl_debug.o 00:01:29.264 CC lib/scsi/scsi_bdev.o 00:01:29.264 CC lib/scsi/scsi_pr.o 00:01:29.264 CC lib/ftl/ftl_l2p.o 00:01:29.264 CC lib/ftl/ftl_io.o 00:01:29.264 CC lib/ftl/ftl_sb.o 00:01:29.264 CC lib/scsi/scsi_rpc.o 00:01:29.264 CC lib/nbd/nbd.o 00:01:29.264 CC lib/scsi/task.o 00:01:29.264 CC lib/ftl/ftl_l2p_flat.o 00:01:29.264 CC lib/ftl/ftl_nv_cache.o 00:01:29.264 CC lib/nbd/nbd_rpc.o 00:01:29.264 CC lib/ftl/ftl_band.o 00:01:29.264 CC lib/ftl/ftl_writer.o 00:01:29.264 CC lib/ftl/ftl_band_ops.o 00:01:29.264 CC lib/ftl/ftl_rq.o 00:01:29.264 CC lib/ftl/ftl_reloc.o 00:01:29.264 CC lib/ftl/ftl_l2p_cache.o 00:01:29.264 CC lib/ftl/ftl_p2l.o 00:01:29.264 CC lib/ftl/mngt/ftl_mngt.o 00:01:29.264 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:29.264 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:29.264 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:29.264 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:29.264 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:29.264 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:29.264 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:29.264 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:29.264 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:29.264 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:29.264 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:29.264 CC lib/ftl/utils/ftl_conf.o 00:01:29.264 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:29.264 CC lib/ftl/utils/ftl_md.o 00:01:29.264 CC lib/ftl/utils/ftl_mempool.o 00:01:29.264 CC lib/ftl/utils/ftl_property.o 00:01:29.264 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:29.264 CC lib/ftl/utils/ftl_bitmap.o 00:01:29.522 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:29.522 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:29.522 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:29.522 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:29.522 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:29.522 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:29.522 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:29.522 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:29.522 CC lib/ftl/base/ftl_base_dev.o 00:01:29.522 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:29.522 CC lib/ftl/base/ftl_base_bdev.o 00:01:29.522 CC lib/ftl/ftl_trace.o 00:01:30.089 LIB libspdk_nbd.a 00:01:30.089 SO libspdk_nbd.so.7.0 00:01:30.089 LIB libspdk_scsi.a 00:01:30.089 LIB libspdk_ublk.a 00:01:30.089 SYMLINK libspdk_nbd.so 00:01:30.089 SO libspdk_ublk.so.3.0 00:01:30.089 SO libspdk_scsi.so.9.0 00:01:30.347 SYMLINK libspdk_ublk.so 00:01:30.347 SYMLINK libspdk_scsi.so 00:01:30.606 LIB libspdk_ftl.a 00:01:30.606 CC lib/iscsi/conn.o 00:01:30.606 CC lib/iscsi/init_grp.o 00:01:30.606 CC lib/iscsi/iscsi.o 00:01:30.606 CC lib/iscsi/md5.o 00:01:30.606 CC lib/iscsi/param.o 00:01:30.606 CC lib/iscsi/portal_grp.o 00:01:30.606 CC lib/iscsi/tgt_node.o 00:01:30.606 CC lib/vhost/vhost.o 00:01:30.606 
CC lib/vhost/vhost_rpc.o 00:01:30.606 CC lib/vhost/vhost_scsi.o 00:01:30.606 CC lib/iscsi/iscsi_subsystem.o 00:01:30.606 CC lib/vhost/rte_vhost_user.o 00:01:30.606 CC lib/vhost/vhost_blk.o 00:01:30.606 CC lib/iscsi/iscsi_rpc.o 00:01:30.606 CC lib/iscsi/task.o 00:01:30.606 SO libspdk_ftl.so.9.0 00:01:30.864 SYMLINK libspdk_ftl.so 00:01:31.430 LIB libspdk_vhost.a 00:01:31.430 SO libspdk_vhost.so.8.0 00:01:31.430 LIB libspdk_nvmf.a 00:01:31.689 SO libspdk_nvmf.so.18.0 00:01:31.689 SYMLINK libspdk_vhost.so 00:01:31.689 SYMLINK libspdk_nvmf.so 00:01:31.947 LIB libspdk_iscsi.a 00:01:31.947 SO libspdk_iscsi.so.8.0 00:01:31.947 SYMLINK libspdk_iscsi.so 00:01:32.514 CC module/env_dpdk/env_dpdk_rpc.o 00:01:32.514 CC module/vfu_device/vfu_virtio.o 00:01:32.514 CC module/vfu_device/vfu_virtio_blk.o 00:01:32.514 CC module/vfu_device/vfu_virtio_rpc.o 00:01:32.514 CC module/vfu_device/vfu_virtio_scsi.o 00:01:32.514 CC module/blob/bdev/blob_bdev.o 00:01:32.514 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:32.514 CC module/scheduler/gscheduler/gscheduler.o 00:01:32.514 LIB libspdk_env_dpdk_rpc.a 00:01:32.514 CC module/accel/ioat/accel_ioat.o 00:01:32.772 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:32.772 CC module/accel/ioat/accel_ioat_rpc.o 00:01:32.772 CC module/accel/iaa/accel_iaa.o 00:01:32.772 CC module/accel/iaa/accel_iaa_rpc.o 00:01:32.772 CC module/accel/error/accel_error_rpc.o 00:01:32.772 CC module/accel/error/accel_error.o 00:01:32.772 CC module/keyring/file/keyring.o 00:01:32.772 CC module/keyring/file/keyring_rpc.o 00:01:32.772 CC module/sock/posix/posix.o 00:01:32.772 CC module/accel/dsa/accel_dsa.o 00:01:32.772 CC module/accel/dsa/accel_dsa_rpc.o 00:01:32.772 SO libspdk_env_dpdk_rpc.so.6.0 00:01:32.772 SYMLINK libspdk_env_dpdk_rpc.so 00:01:32.772 LIB libspdk_scheduler_gscheduler.a 00:01:32.772 SO libspdk_scheduler_gscheduler.so.4.0 00:01:32.772 LIB libspdk_scheduler_dynamic.a 00:01:32.772 LIB libspdk_scheduler_dpdk_governor.a 00:01:32.772 LIB libspdk_keyring_file.a 00:01:32.772 LIB libspdk_accel_ioat.a 00:01:32.772 LIB libspdk_accel_error.a 00:01:32.772 SO libspdk_scheduler_dynamic.so.4.0 00:01:32.772 SO libspdk_keyring_file.so.1.0 00:01:32.772 SYMLINK libspdk_scheduler_gscheduler.so 00:01:32.772 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:32.772 LIB libspdk_accel_iaa.a 00:01:32.772 SO libspdk_accel_ioat.so.6.0 00:01:32.772 SO libspdk_accel_error.so.2.0 00:01:32.772 LIB libspdk_blob_bdev.a 00:01:32.772 SO libspdk_accel_iaa.so.3.0 00:01:33.030 SYMLINK libspdk_scheduler_dynamic.so 00:01:33.030 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:33.030 LIB libspdk_accel_dsa.a 00:01:33.030 SYMLINK libspdk_keyring_file.so 00:01:33.030 SO libspdk_blob_bdev.so.11.0 00:01:33.030 SYMLINK libspdk_accel_ioat.so 00:01:33.030 SYMLINK libspdk_accel_error.so 00:01:33.030 SYMLINK libspdk_accel_iaa.so 00:01:33.030 SO libspdk_accel_dsa.so.5.0 00:01:33.030 SYMLINK libspdk_blob_bdev.so 00:01:33.030 SYMLINK libspdk_accel_dsa.so 00:01:33.030 LIB libspdk_vfu_device.a 00:01:33.289 SO libspdk_vfu_device.so.3.0 00:01:33.289 SYMLINK libspdk_vfu_device.so 00:01:33.289 LIB libspdk_sock_posix.a 00:01:33.289 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:33.289 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:33.548 CC module/bdev/split/vbdev_split.o 00:01:33.548 CC module/bdev/split/vbdev_split_rpc.o 00:01:33.548 CC module/bdev/iscsi/bdev_iscsi.o 00:01:33.548 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:33.548 CC module/bdev/lvol/vbdev_lvol.o 00:01:33.548 CC module/bdev/ftl/bdev_ftl.o 
00:01:33.548 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:33.548 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:33.548 CC module/bdev/gpt/gpt.o 00:01:33.548 CC module/bdev/gpt/vbdev_gpt.o 00:01:33.548 CC module/bdev/aio/bdev_aio.o 00:01:33.548 SO libspdk_sock_posix.so.6.0 00:01:33.548 CC module/bdev/aio/bdev_aio_rpc.o 00:01:33.548 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:33.548 CC module/bdev/raid/bdev_raid.o 00:01:33.548 CC module/bdev/error/vbdev_error.o 00:01:33.548 CC module/bdev/delay/vbdev_delay.o 00:01:33.548 CC module/bdev/raid/bdev_raid_rpc.o 00:01:33.548 CC module/bdev/malloc/bdev_malloc.o 00:01:33.548 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:33.548 CC module/bdev/raid/bdev_raid_sb.o 00:01:33.548 CC module/bdev/null/bdev_null_rpc.o 00:01:33.548 CC module/bdev/error/vbdev_error_rpc.o 00:01:33.548 CC module/blobfs/bdev/blobfs_bdev.o 00:01:33.548 CC module/bdev/null/bdev_null.o 00:01:33.548 CC module/bdev/raid/raid0.o 00:01:33.548 CC module/bdev/nvme/nvme_rpc.o 00:01:33.548 CC module/bdev/passthru/vbdev_passthru.o 00:01:33.548 CC module/bdev/nvme/bdev_nvme.o 00:01:33.548 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:33.548 CC module/bdev/raid/raid1.o 00:01:33.548 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:33.548 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:33.548 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:33.548 CC module/bdev/raid/concat.o 00:01:33.548 CC module/bdev/nvme/bdev_mdns_client.o 00:01:33.548 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:33.548 CC module/bdev/nvme/vbdev_opal.o 00:01:33.548 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:33.548 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:33.548 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:33.548 SYMLINK libspdk_sock_posix.so 00:01:33.805 LIB libspdk_blobfs_bdev.a 00:01:33.805 SO libspdk_blobfs_bdev.so.6.0 00:01:33.805 LIB libspdk_bdev_split.a 00:01:33.805 SO libspdk_bdev_split.so.6.0 00:01:33.805 LIB libspdk_bdev_gpt.a 00:01:33.805 LIB libspdk_bdev_null.a 00:01:33.805 LIB libspdk_bdev_ftl.a 00:01:33.805 SYMLINK libspdk_blobfs_bdev.so 00:01:33.805 LIB libspdk_bdev_error.a 00:01:33.805 SO libspdk_bdev_gpt.so.6.0 00:01:33.805 SO libspdk_bdev_ftl.so.6.0 00:01:33.805 SO libspdk_bdev_null.so.6.0 00:01:33.805 SYMLINK libspdk_bdev_split.so 00:01:33.805 SO libspdk_bdev_error.so.6.0 00:01:33.805 LIB libspdk_bdev_passthru.a 00:01:33.805 LIB libspdk_bdev_zone_block.a 00:01:33.805 LIB libspdk_bdev_iscsi.a 00:01:33.805 SYMLINK libspdk_bdev_null.so 00:01:33.805 SO libspdk_bdev_passthru.so.6.0 00:01:33.805 LIB libspdk_bdev_aio.a 00:01:33.805 SYMLINK libspdk_bdev_gpt.so 00:01:33.805 SYMLINK libspdk_bdev_error.so 00:01:33.805 SYMLINK libspdk_bdev_ftl.so 00:01:33.805 SO libspdk_bdev_zone_block.so.6.0 00:01:33.805 LIB libspdk_bdev_delay.a 00:01:33.805 SO libspdk_bdev_iscsi.so.6.0 00:01:33.805 LIB libspdk_bdev_malloc.a 00:01:33.805 SO libspdk_bdev_aio.so.6.0 00:01:33.805 SO libspdk_bdev_delay.so.6.0 00:01:33.805 SYMLINK libspdk_bdev_passthru.so 00:01:34.061 SO libspdk_bdev_malloc.so.6.0 00:01:34.061 SYMLINK libspdk_bdev_zone_block.so 00:01:34.061 SYMLINK libspdk_bdev_iscsi.so 00:01:34.061 SYMLINK libspdk_bdev_aio.so 00:01:34.061 LIB libspdk_bdev_lvol.a 00:01:34.061 SYMLINK libspdk_bdev_delay.so 00:01:34.061 SYMLINK libspdk_bdev_malloc.so 00:01:34.061 SO libspdk_bdev_lvol.so.6.0 00:01:34.061 LIB libspdk_bdev_virtio.a 00:01:34.061 SO libspdk_bdev_virtio.so.6.0 00:01:34.061 SYMLINK libspdk_bdev_lvol.so 00:01:34.061 SYMLINK libspdk_bdev_virtio.so 00:01:34.318 LIB libspdk_bdev_raid.a 00:01:34.318 SO libspdk_bdev_raid.so.6.0 
00:01:34.576 SYMLINK libspdk_bdev_raid.so 00:01:35.509 LIB libspdk_bdev_nvme.a 00:01:35.509 SO libspdk_bdev_nvme.so.7.0 00:01:35.767 SYMLINK libspdk_bdev_nvme.so 00:01:36.335 CC module/event/subsystems/keyring/keyring.o 00:01:36.335 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:36.335 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:36.335 CC module/event/subsystems/vmd/vmd.o 00:01:36.335 CC module/event/subsystems/sock/sock.o 00:01:36.335 CC module/event/subsystems/iobuf/iobuf.o 00:01:36.335 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:36.335 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:36.335 CC module/event/subsystems/scheduler/scheduler.o 00:01:36.335 LIB libspdk_event_keyring.a 00:01:36.335 LIB libspdk_event_vfu_tgt.a 00:01:36.335 SO libspdk_event_keyring.so.1.0 00:01:36.335 LIB libspdk_event_sock.a 00:01:36.335 LIB libspdk_event_vmd.a 00:01:36.335 SO libspdk_event_vfu_tgt.so.3.0 00:01:36.335 LIB libspdk_event_scheduler.a 00:01:36.335 LIB libspdk_event_vhost_blk.a 00:01:36.335 SO libspdk_event_vmd.so.6.0 00:01:36.335 LIB libspdk_event_iobuf.a 00:01:36.335 SYMLINK libspdk_event_keyring.so 00:01:36.335 SO libspdk_event_sock.so.5.0 00:01:36.335 SO libspdk_event_scheduler.so.4.0 00:01:36.593 SYMLINK libspdk_event_vfu_tgt.so 00:01:36.593 SO libspdk_event_vhost_blk.so.3.0 00:01:36.593 SO libspdk_event_iobuf.so.3.0 00:01:36.593 SYMLINK libspdk_event_vmd.so 00:01:36.593 SYMLINK libspdk_event_sock.so 00:01:36.593 SYMLINK libspdk_event_scheduler.so 00:01:36.593 SYMLINK libspdk_event_vhost_blk.so 00:01:36.593 SYMLINK libspdk_event_iobuf.so 00:01:36.851 CC module/event/subsystems/accel/accel.o 00:01:36.851 LIB libspdk_event_accel.a 00:01:37.109 SO libspdk_event_accel.so.6.0 00:01:37.109 SYMLINK libspdk_event_accel.so 00:01:37.366 CC module/event/subsystems/bdev/bdev.o 00:01:37.624 LIB libspdk_event_bdev.a 00:01:37.624 SO libspdk_event_bdev.so.6.0 00:01:37.624 SYMLINK libspdk_event_bdev.so 00:01:37.883 CC module/event/subsystems/scsi/scsi.o 00:01:37.883 CC module/event/subsystems/ublk/ublk.o 00:01:37.883 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:37.883 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:37.883 CC module/event/subsystems/nbd/nbd.o 00:01:38.142 LIB libspdk_event_scsi.a 00:01:38.142 LIB libspdk_event_ublk.a 00:01:38.142 LIB libspdk_event_nbd.a 00:01:38.142 SO libspdk_event_ublk.so.3.0 00:01:38.142 SO libspdk_event_scsi.so.6.0 00:01:38.142 SO libspdk_event_nbd.so.6.0 00:01:38.142 LIB libspdk_event_nvmf.a 00:01:38.142 SYMLINK libspdk_event_ublk.so 00:01:38.142 SYMLINK libspdk_event_scsi.so 00:01:38.142 SYMLINK libspdk_event_nbd.so 00:01:38.142 SO libspdk_event_nvmf.so.6.0 00:01:38.142 SYMLINK libspdk_event_nvmf.so 00:01:38.400 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:38.400 CC module/event/subsystems/iscsi/iscsi.o 00:01:38.658 LIB libspdk_event_vhost_scsi.a 00:01:38.658 LIB libspdk_event_iscsi.a 00:01:38.658 SO libspdk_event_vhost_scsi.so.3.0 00:01:38.658 SO libspdk_event_iscsi.so.6.0 00:01:38.658 SYMLINK libspdk_event_vhost_scsi.so 00:01:38.658 SYMLINK libspdk_event_iscsi.so 00:01:38.916 SO libspdk.so.6.0 00:01:38.916 SYMLINK libspdk.so 00:01:39.180 TEST_HEADER include/spdk/accel.h 00:01:39.180 CC app/spdk_lspci/spdk_lspci.o 00:01:39.180 TEST_HEADER include/spdk/accel_module.h 00:01:39.180 TEST_HEADER include/spdk/assert.h 00:01:39.180 TEST_HEADER include/spdk/barrier.h 00:01:39.180 TEST_HEADER include/spdk/base64.h 00:01:39.180 TEST_HEADER include/spdk/bdev.h 00:01:39.180 TEST_HEADER include/spdk/bdev_module.h 00:01:39.180 TEST_HEADER 
include/spdk/bdev_zone.h 00:01:39.180 TEST_HEADER include/spdk/bit_array.h 00:01:39.180 TEST_HEADER include/spdk/bit_pool.h 00:01:39.180 TEST_HEADER include/spdk/blob_bdev.h 00:01:39.180 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:39.180 TEST_HEADER include/spdk/blobfs.h 00:01:39.180 TEST_HEADER include/spdk/blob.h 00:01:39.180 TEST_HEADER include/spdk/conf.h 00:01:39.180 TEST_HEADER include/spdk/config.h 00:01:39.180 TEST_HEADER include/spdk/crc16.h 00:01:39.180 TEST_HEADER include/spdk/cpuset.h 00:01:39.180 CC test/rpc_client/rpc_client_test.o 00:01:39.180 TEST_HEADER include/spdk/crc32.h 00:01:39.180 CXX app/trace/trace.o 00:01:39.180 TEST_HEADER include/spdk/crc64.h 00:01:39.180 CC app/spdk_nvme_identify/identify.o 00:01:39.180 TEST_HEADER include/spdk/dif.h 00:01:39.180 TEST_HEADER include/spdk/dma.h 00:01:39.180 TEST_HEADER include/spdk/env_dpdk.h 00:01:39.180 TEST_HEADER include/spdk/env.h 00:01:39.180 TEST_HEADER include/spdk/endian.h 00:01:39.180 CC app/trace_record/trace_record.o 00:01:39.180 TEST_HEADER include/spdk/event.h 00:01:39.180 CC app/spdk_nvme_discover/discovery_aer.o 00:01:39.180 CC app/spdk_top/spdk_top.o 00:01:39.180 TEST_HEADER include/spdk/fd.h 00:01:39.180 TEST_HEADER include/spdk/fd_group.h 00:01:39.180 CC app/spdk_nvme_perf/perf.o 00:01:39.180 TEST_HEADER include/spdk/ftl.h 00:01:39.180 TEST_HEADER include/spdk/file.h 00:01:39.180 TEST_HEADER include/spdk/gpt_spec.h 00:01:39.180 TEST_HEADER include/spdk/hexlify.h 00:01:39.180 TEST_HEADER include/spdk/histogram_data.h 00:01:39.180 TEST_HEADER include/spdk/idxd.h 00:01:39.181 TEST_HEADER include/spdk/init.h 00:01:39.181 TEST_HEADER include/spdk/idxd_spec.h 00:01:39.181 TEST_HEADER include/spdk/ioat.h 00:01:39.181 TEST_HEADER include/spdk/ioat_spec.h 00:01:39.181 TEST_HEADER include/spdk/iscsi_spec.h 00:01:39.181 TEST_HEADER include/spdk/json.h 00:01:39.181 TEST_HEADER include/spdk/keyring.h 00:01:39.181 TEST_HEADER include/spdk/keyring_module.h 00:01:39.181 TEST_HEADER include/spdk/jsonrpc.h 00:01:39.181 TEST_HEADER include/spdk/likely.h 00:01:39.181 TEST_HEADER include/spdk/log.h 00:01:39.181 TEST_HEADER include/spdk/lvol.h 00:01:39.181 TEST_HEADER include/spdk/memory.h 00:01:39.181 TEST_HEADER include/spdk/mmio.h 00:01:39.181 TEST_HEADER include/spdk/notify.h 00:01:39.181 TEST_HEADER include/spdk/nbd.h 00:01:39.181 TEST_HEADER include/spdk/nvme.h 00:01:39.181 TEST_HEADER include/spdk/nvme_intel.h 00:01:39.181 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:39.181 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:39.181 TEST_HEADER include/spdk/nvme_spec.h 00:01:39.181 TEST_HEADER include/spdk/nvme_zns.h 00:01:39.181 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:39.181 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:39.181 TEST_HEADER include/spdk/nvmf.h 00:01:39.181 TEST_HEADER include/spdk/nvmf_spec.h 00:01:39.181 TEST_HEADER include/spdk/nvmf_transport.h 00:01:39.181 TEST_HEADER include/spdk/opal_spec.h 00:01:39.181 TEST_HEADER include/spdk/opal.h 00:01:39.181 TEST_HEADER include/spdk/pci_ids.h 00:01:39.181 TEST_HEADER include/spdk/pipe.h 00:01:39.181 TEST_HEADER include/spdk/queue.h 00:01:39.181 TEST_HEADER include/spdk/reduce.h 00:01:39.181 TEST_HEADER include/spdk/scheduler.h 00:01:39.181 TEST_HEADER include/spdk/rpc.h 00:01:39.181 TEST_HEADER include/spdk/scsi_spec.h 00:01:39.181 TEST_HEADER include/spdk/scsi.h 00:01:39.181 TEST_HEADER include/spdk/sock.h 00:01:39.181 TEST_HEADER include/spdk/stdinc.h 00:01:39.181 TEST_HEADER include/spdk/string.h 00:01:39.181 TEST_HEADER include/spdk/thread.h 00:01:39.181 
TEST_HEADER include/spdk/trace.h 00:01:39.181 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:39.181 TEST_HEADER include/spdk/trace_parser.h 00:01:39.181 TEST_HEADER include/spdk/tree.h 00:01:39.181 TEST_HEADER include/spdk/ublk.h 00:01:39.181 TEST_HEADER include/spdk/util.h 00:01:39.181 TEST_HEADER include/spdk/uuid.h 00:01:39.181 CC app/vhost/vhost.o 00:01:39.181 TEST_HEADER include/spdk/version.h 00:01:39.181 CC app/iscsi_tgt/iscsi_tgt.o 00:01:39.181 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:39.181 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:39.181 TEST_HEADER include/spdk/vhost.h 00:01:39.181 TEST_HEADER include/spdk/xor.h 00:01:39.181 CC app/nvmf_tgt/nvmf_main.o 00:01:39.181 TEST_HEADER include/spdk/zipf.h 00:01:39.181 TEST_HEADER include/spdk/vmd.h 00:01:39.181 CC app/spdk_dd/spdk_dd.o 00:01:39.181 CXX test/cpp_headers/accel.o 00:01:39.181 CXX test/cpp_headers/accel_module.o 00:01:39.181 CXX test/cpp_headers/assert.o 00:01:39.181 CXX test/cpp_headers/base64.o 00:01:39.181 CXX test/cpp_headers/barrier.o 00:01:39.181 CXX test/cpp_headers/bdev.o 00:01:39.181 CXX test/cpp_headers/bdev_module.o 00:01:39.181 CXX test/cpp_headers/bdev_zone.o 00:01:39.181 CXX test/cpp_headers/bit_array.o 00:01:39.181 CXX test/cpp_headers/blobfs_bdev.o 00:01:39.181 CXX test/cpp_headers/bit_pool.o 00:01:39.181 CXX test/cpp_headers/blobfs.o 00:01:39.181 CXX test/cpp_headers/blob_bdev.o 00:01:39.181 CXX test/cpp_headers/blob.o 00:01:39.181 CXX test/cpp_headers/cpuset.o 00:01:39.181 CXX test/cpp_headers/conf.o 00:01:39.181 CXX test/cpp_headers/config.o 00:01:39.181 CXX test/cpp_headers/crc16.o 00:01:39.181 CXX test/cpp_headers/crc32.o 00:01:39.181 CXX test/cpp_headers/crc64.o 00:01:39.181 CXX test/cpp_headers/dif.o 00:01:39.181 CC app/spdk_tgt/spdk_tgt.o 00:01:39.448 CXX test/cpp_headers/dma.o 00:01:39.448 CC test/env/vtophys/vtophys.o 00:01:39.448 CC test/thread/poller_perf/poller_perf.o 00:01:39.448 CC test/app/jsoncat/jsoncat.o 00:01:39.448 CC examples/accel/perf/accel_perf.o 00:01:39.448 CC test/app/stub/stub.o 00:01:39.448 CC test/nvme/reset/reset.o 00:01:39.448 CC test/app/histogram_perf/histogram_perf.o 00:01:39.448 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:39.448 CC test/event/event_perf/event_perf.o 00:01:39.448 CC examples/ioat/verify/verify.o 00:01:39.448 CC test/event/reactor_perf/reactor_perf.o 00:01:39.448 CC examples/nvme/arbitration/arbitration.o 00:01:39.448 CC test/nvme/aer/aer.o 00:01:39.448 CC examples/util/zipf/zipf.o 00:01:39.448 CC test/nvme/reserve/reserve.o 00:01:39.448 CC test/env/memory/memory_ut.o 00:01:39.448 CC test/env/pci/pci_ut.o 00:01:39.448 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:39.448 CC test/nvme/sgl/sgl.o 00:01:39.448 CC examples/nvme/hello_world/hello_world.o 00:01:39.448 CC examples/ioat/perf/perf.o 00:01:39.448 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:39.448 CC test/event/reactor/reactor.o 00:01:39.448 CC test/nvme/simple_copy/simple_copy.o 00:01:39.448 CC test/dma/test_dma/test_dma.o 00:01:39.448 CC app/fio/nvme/fio_plugin.o 00:01:39.448 CC examples/vmd/led/led.o 00:01:39.448 CC test/nvme/connect_stress/connect_stress.o 00:01:39.448 CC test/nvme/e2edp/nvme_dp.o 00:01:39.448 CC examples/nvme/abort/abort.o 00:01:39.448 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:39.448 CC examples/idxd/perf/perf.o 00:01:39.448 CC test/bdev/bdevio/bdevio.o 00:01:39.448 CC examples/nvme/reconnect/reconnect.o 00:01:39.448 CC examples/blob/hello_world/hello_blob.o 00:01:39.448 CC test/nvme/startup/startup.o 00:01:39.448 CC 
examples/nvme/hotplug/hotplug.o 00:01:39.448 CC test/event/app_repeat/app_repeat.o 00:01:39.448 CC examples/vmd/lsvmd/lsvmd.o 00:01:39.448 CC examples/sock/hello_world/hello_sock.o 00:01:39.448 CC test/app/bdev_svc/bdev_svc.o 00:01:39.448 CC test/nvme/err_injection/err_injection.o 00:01:39.448 CC test/event/scheduler/scheduler.o 00:01:39.448 CC examples/bdev/hello_world/hello_bdev.o 00:01:39.448 CC test/nvme/fdp/fdp.o 00:01:39.448 CC test/nvme/cuse/cuse.o 00:01:39.448 CC test/blobfs/mkfs/mkfs.o 00:01:39.448 CC test/nvme/compliance/nvme_compliance.o 00:01:39.448 CC test/nvme/overhead/overhead.o 00:01:39.448 CC examples/blob/cli/blobcli.o 00:01:39.448 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:39.448 CC test/nvme/boot_partition/boot_partition.o 00:01:39.448 CC test/nvme/fused_ordering/fused_ordering.o 00:01:39.448 CC examples/thread/thread/thread_ex.o 00:01:39.448 CC app/fio/bdev/fio_plugin.o 00:01:39.448 CC test/accel/dif/dif.o 00:01:39.448 CC examples/bdev/bdevperf/bdevperf.o 00:01:39.448 CC examples/nvmf/nvmf/nvmf.o 00:01:39.711 LINK spdk_lspci 00:01:39.711 LINK rpc_client_test 00:01:39.711 LINK spdk_nvme_discover 00:01:39.711 LINK interrupt_tgt 00:01:39.711 CC test/lvol/esnap/esnap.o 00:01:39.711 LINK vhost 00:01:39.711 CC test/env/mem_callbacks/mem_callbacks.o 00:01:39.711 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:39.711 LINK vtophys 00:01:39.711 CXX test/cpp_headers/env_dpdk.o 00:01:39.711 CXX test/cpp_headers/endian.o 00:01:39.711 CXX test/cpp_headers/env.o 00:01:39.711 CXX test/cpp_headers/event.o 00:01:39.711 CXX test/cpp_headers/fd_group.o 00:01:39.711 LINK spdk_trace_record 00:01:39.711 CXX test/cpp_headers/fd.o 00:01:39.711 LINK lsvmd 00:01:39.711 LINK stub 00:01:39.970 CXX test/cpp_headers/file.o 00:01:39.970 LINK nvmf_tgt 00:01:39.970 LINK cmb_copy 00:01:39.970 LINK jsoncat 00:01:39.970 CXX test/cpp_headers/ftl.o 00:01:39.970 LINK bdev_svc 00:01:39.970 LINK iscsi_tgt 00:01:39.970 CXX test/cpp_headers/gpt_spec.o 00:01:39.970 LINK pmr_persistence 00:01:39.970 LINK poller_perf 00:01:39.970 LINK reactor_perf 00:01:39.970 LINK err_injection 00:01:39.970 LINK event_perf 00:01:39.970 LINK histogram_perf 00:01:39.970 LINK reserve 00:01:39.970 LINK spdk_tgt 00:01:39.970 LINK zipf 00:01:39.970 LINK led 00:01:39.970 LINK reactor 00:01:39.970 CXX test/cpp_headers/histogram_data.o 00:01:39.970 CXX test/cpp_headers/hexlify.o 00:01:39.970 CXX test/cpp_headers/idxd.o 00:01:39.970 LINK boot_partition 00:01:39.970 LINK scheduler 00:01:39.970 CXX test/cpp_headers/idxd_spec.o 00:01:39.970 LINK hello_world 00:01:39.970 LINK env_dpdk_post_init 00:01:39.970 LINK app_repeat 00:01:39.970 LINK hello_bdev 00:01:39.970 LINK hello_blob 00:01:39.970 CXX test/cpp_headers/init.o 00:01:39.970 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:39.970 CXX test/cpp_headers/ioat.o 00:01:39.970 CXX test/cpp_headers/ioat_spec.o 00:01:39.970 LINK reset 00:01:39.970 CXX test/cpp_headers/iscsi_spec.o 00:01:39.970 LINK connect_stress 00:01:39.970 CXX test/cpp_headers/json.o 00:01:39.970 LINK simple_copy 00:01:39.970 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:39.970 LINK startup 00:01:39.970 LINK spdk_dd 00:01:39.970 LINK hello_sock 00:01:39.970 CXX test/cpp_headers/jsonrpc.o 00:01:39.970 LINK thread 00:01:39.970 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:39.970 CXX test/cpp_headers/keyring.o 00:01:39.970 CXX test/cpp_headers/keyring_module.o 00:01:39.970 LINK aer 00:01:39.970 CXX test/cpp_headers/likely.o 00:01:39.970 LINK mkfs 00:01:39.970 LINK doorbell_aers 00:01:39.970 LINK verify 00:01:40.230 CXX 
test/cpp_headers/log.o 00:01:40.230 CXX test/cpp_headers/lvol.o 00:01:40.230 LINK fused_ordering 00:01:40.230 LINK ioat_perf 00:01:40.230 LINK spdk_trace 00:01:40.230 LINK hotplug 00:01:40.230 CXX test/cpp_headers/memory.o 00:01:40.230 LINK fdp 00:01:40.230 CXX test/cpp_headers/mmio.o 00:01:40.230 LINK nvme_dp 00:01:40.230 CXX test/cpp_headers/nbd.o 00:01:40.230 CXX test/cpp_headers/notify.o 00:01:40.230 CXX test/cpp_headers/nvme.o 00:01:40.230 LINK sgl 00:01:40.230 CXX test/cpp_headers/nvme_intel.o 00:01:40.230 CXX test/cpp_headers/nvme_ocssd.o 00:01:40.230 LINK reconnect 00:01:40.230 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:40.230 CXX test/cpp_headers/nvme_spec.o 00:01:40.230 LINK nvme_compliance 00:01:40.230 CXX test/cpp_headers/nvme_zns.o 00:01:40.230 CXX test/cpp_headers/nvmf_cmd.o 00:01:40.230 CXX test/cpp_headers/nvmf.o 00:01:40.230 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:40.230 LINK abort 00:01:40.230 CXX test/cpp_headers/nvmf_transport.o 00:01:40.230 CXX test/cpp_headers/nvmf_spec.o 00:01:40.230 CXX test/cpp_headers/opal.o 00:01:40.230 LINK nvmf 00:01:40.230 CXX test/cpp_headers/opal_spec.o 00:01:40.230 LINK arbitration 00:01:40.230 CXX test/cpp_headers/pci_ids.o 00:01:40.230 LINK bdevio 00:01:40.230 CXX test/cpp_headers/pipe.o 00:01:40.230 LINK overhead 00:01:40.230 CXX test/cpp_headers/queue.o 00:01:40.230 CXX test/cpp_headers/reduce.o 00:01:40.230 CXX test/cpp_headers/rpc.o 00:01:40.230 LINK dif 00:01:40.230 CXX test/cpp_headers/scheduler.o 00:01:40.230 LINK idxd_perf 00:01:40.230 CXX test/cpp_headers/scsi.o 00:01:40.230 CXX test/cpp_headers/scsi_spec.o 00:01:40.230 CXX test/cpp_headers/sock.o 00:01:40.230 CXX test/cpp_headers/string.o 00:01:40.230 CXX test/cpp_headers/stdinc.o 00:01:40.230 CXX test/cpp_headers/thread.o 00:01:40.489 CXX test/cpp_headers/trace.o 00:01:40.489 CXX test/cpp_headers/trace_parser.o 00:01:40.489 CXX test/cpp_headers/tree.o 00:01:40.489 CXX test/cpp_headers/ublk.o 00:01:40.489 CXX test/cpp_headers/util.o 00:01:40.489 CXX test/cpp_headers/uuid.o 00:01:40.489 CXX test/cpp_headers/version.o 00:01:40.489 CXX test/cpp_headers/vfio_user_pci.o 00:01:40.489 CXX test/cpp_headers/vfio_user_spec.o 00:01:40.489 CXX test/cpp_headers/vhost.o 00:01:40.489 CXX test/cpp_headers/vmd.o 00:01:40.489 CXX test/cpp_headers/xor.o 00:01:40.489 CXX test/cpp_headers/zipf.o 00:01:40.489 LINK test_dma 00:01:40.489 LINK pci_ut 00:01:40.489 LINK blobcli 00:01:40.489 LINK nvme_fuzz 00:01:40.489 LINK accel_perf 00:01:40.489 LINK mem_callbacks 00:01:40.747 LINK spdk_bdev 00:01:40.747 LINK nvme_manage 00:01:40.747 LINK spdk_nvme 00:01:40.747 LINK spdk_nvme_perf 00:01:40.747 LINK bdevperf 00:01:40.747 LINK spdk_top 00:01:40.747 LINK spdk_nvme_identify 00:01:41.006 LINK vhost_fuzz 00:01:41.006 LINK memory_ut 00:01:41.006 LINK cuse 00:01:41.945 LINK iscsi_fuzz 00:01:44.482 LINK esnap 00:01:44.742 00:01:44.742 real 0m46.622s 00:01:44.742 user 7m6.927s 00:01:44.742 sys 3m34.671s 00:01:44.742 15:44:24 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:44.742 15:44:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.742 ************************************ 00:01:44.742 END TEST make 00:01:44.742 ************************************ 00:01:44.742 15:44:24 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:44.742 15:44:24 -- pm/common@30 -- $ signal_monitor_resources TERM 00:01:44.742 15:44:24 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:01:44.742 15:44:24 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.742 15:44:24 -- pm/common@44 -- 
$ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:44.742 15:44:24 -- pm/common@45 -- $ pid=2147575 00:01:44.742 15:44:24 -- pm/common@52 -- $ sudo kill -TERM 2147575 00:01:44.742 15:44:24 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.742 15:44:24 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:44.742 15:44:24 -- pm/common@45 -- $ pid=2147581 00:01:44.742 15:44:24 -- pm/common@52 -- $ sudo kill -TERM 2147581 00:01:44.742 15:44:24 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.742 15:44:24 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:44.742 15:44:24 -- pm/common@45 -- $ pid=2147579 00:01:44.742 15:44:24 -- pm/common@52 -- $ sudo kill -TERM 2147579 00:01:44.742 15:44:24 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.742 15:44:24 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:44.742 15:44:24 -- pm/common@45 -- $ pid=2147582 00:01:44.742 15:44:24 -- pm/common@52 -- $ sudo kill -TERM 2147582 00:01:45.013 15:44:24 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:45.013 15:44:24 -- nvmf/common.sh@7 -- # uname -s 00:01:45.013 15:44:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:45.013 15:44:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:45.013 15:44:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:45.013 15:44:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:45.013 15:44:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:45.013 15:44:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:45.013 15:44:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:45.013 15:44:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:45.013 15:44:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:45.013 15:44:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:45.013 15:44:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:01:45.014 15:44:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:01:45.014 15:44:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:45.014 15:44:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:45.014 15:44:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:45.014 15:44:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:45.014 15:44:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:45.014 15:44:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:45.014 15:44:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:45.014 15:44:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:45.014 15:44:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:45.014 15:44:24 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:45.014 15:44:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:45.014 15:44:24 -- paths/export.sh@5 -- # export PATH 00:01:45.014 15:44:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:45.014 15:44:24 -- nvmf/common.sh@47 -- # : 0 00:01:45.014 15:44:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:45.014 15:44:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:45.014 15:44:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:45.014 15:44:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:45.014 15:44:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:45.014 15:44:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:45.014 15:44:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:45.014 15:44:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:45.014 15:44:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:45.014 15:44:24 -- spdk/autotest.sh@32 -- # uname -s 00:01:45.014 15:44:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:45.014 15:44:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:45.014 15:44:24 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:45.014 15:44:24 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:45.014 15:44:24 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:45.014 15:44:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:45.014 15:44:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:45.014 15:44:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:45.014 15:44:24 -- spdk/autotest.sh@48 -- # udevadm_pid=2206487 00:01:45.014 15:44:24 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:45.014 15:44:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:45.014 15:44:24 -- pm/common@17 -- # local monitor 00:01:45.014 15:44:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:45.014 15:44:24 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2206489 00:01:45.014 15:44:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:45.014 15:44:24 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2206493 00:01:45.014 15:44:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:45.014 15:44:24 -- pm/common@21 -- # date +%s 00:01:45.014 15:44:24 -- pm/common@21 -- # date +%s 00:01:45.014 15:44:24 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2206495 00:01:45.014 15:44:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:45.014 15:44:24 -- pm/common@23 -- # 
MONITOR_RESOURCES_PIDS["$monitor"]=2206499 00:01:45.014 15:44:24 -- pm/common@26 -- # sleep 1 00:01:45.014 15:44:24 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714139064 00:01:45.014 15:44:24 -- pm/common@21 -- # date +%s 00:01:45.014 15:44:24 -- pm/common@21 -- # date +%s 00:01:45.014 15:44:24 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714139064 00:01:45.014 15:44:24 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714139064 00:01:45.014 15:44:24 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714139064 00:01:45.014 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714139064_collect-vmstat.pm.log 00:01:45.014 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714139064_collect-cpu-load.pm.log 00:01:45.014 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714139064_collect-bmc-pm.bmc.pm.log 00:01:45.014 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714139064_collect-cpu-temp.pm.log 00:01:46.029 15:44:25 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:46.029 15:44:25 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:46.029 15:44:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:01:46.029 15:44:25 -- common/autotest_common.sh@10 -- # set +x 00:01:46.029 15:44:25 -- spdk/autotest.sh@59 -- # create_test_list 00:01:46.029 15:44:25 -- common/autotest_common.sh@734 -- # xtrace_disable 00:01:46.029 15:44:25 -- common/autotest_common.sh@10 -- # set +x 00:01:46.029 15:44:25 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:46.029 15:44:25 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:46.029 15:44:25 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:46.029 15:44:25 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:46.029 15:44:25 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:46.029 15:44:25 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:46.029 15:44:25 -- common/autotest_common.sh@1441 -- # uname 00:01:46.029 15:44:25 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:01:46.029 15:44:25 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:46.029 15:44:25 -- common/autotest_common.sh@1461 -- # uname 00:01:46.029 15:44:25 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:01:46.029 15:44:25 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:46.029 15:44:25 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:46.029 15:44:25 -- spdk/autotest.sh@72 -- # hash lcov 00:01:46.029 15:44:25 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc 
== *\c\l\a\n\g* ]] 00:01:46.029 15:44:25 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:46.029 --rc lcov_branch_coverage=1 00:01:46.029 --rc lcov_function_coverage=1 00:01:46.029 --rc genhtml_branch_coverage=1 00:01:46.029 --rc genhtml_function_coverage=1 00:01:46.029 --rc genhtml_legend=1 00:01:46.029 --rc geninfo_all_blocks=1 00:01:46.029 ' 00:01:46.029 15:44:25 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:46.029 --rc lcov_branch_coverage=1 00:01:46.029 --rc lcov_function_coverage=1 00:01:46.029 --rc genhtml_branch_coverage=1 00:01:46.029 --rc genhtml_function_coverage=1 00:01:46.029 --rc genhtml_legend=1 00:01:46.029 --rc geninfo_all_blocks=1 00:01:46.029 ' 00:01:46.029 15:44:25 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:46.029 --rc lcov_branch_coverage=1 00:01:46.029 --rc lcov_function_coverage=1 00:01:46.029 --rc genhtml_branch_coverage=1 00:01:46.029 --rc genhtml_function_coverage=1 00:01:46.029 --rc genhtml_legend=1 00:01:46.029 --rc geninfo_all_blocks=1 00:01:46.029 --no-external' 00:01:46.029 15:44:25 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:46.029 --rc lcov_branch_coverage=1 00:01:46.029 --rc lcov_function_coverage=1 00:01:46.029 --rc genhtml_branch_coverage=1 00:01:46.029 --rc genhtml_function_coverage=1 00:01:46.029 --rc genhtml_legend=1 00:01:46.029 --rc geninfo_all_blocks=1 00:01:46.029 --no-external' 00:01:46.029 15:44:25 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:46.029 lcov: LCOV version 1.14 00:01:46.029 15:44:25 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:01:52.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:01:52.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:01:52.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:01:52.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:01:52.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:01:52.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:01:52.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:01:52.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:01:52.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:01:52.604 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:01:52.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:01:52.604 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno
00:01:52.604 geninfo: WARNING: GCOV did not produce any data for any of the remaining header stubs under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers (assert.gcno through zipf.gcno); each one reported "no functions found" (timestamps 00:01:52.604 to 00:01:52.865)
00:01:56.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:01:56.155 geninfo: WARNING: GCOV did not produce any data for
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:04.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:04.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:04.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:04.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:04.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:04.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:09.640 15:44:49 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:09.640 15:44:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:09.640 15:44:49 -- common/autotest_common.sh@10 -- # set +x 00:02:09.640 15:44:49 -- spdk/autotest.sh@91 -- # rm -f 00:02:09.640 15:44:49 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:12.177 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:12.177 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:12.177 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:12.178 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:12.178 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:12.178 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:12.178 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:12.178 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:12.178 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:12.178 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:12.178 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:12.178 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:12.178 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:12.178 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:12.178 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:12.178 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:12.178 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:12.178 15:44:51 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:12.178 15:44:51 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:12.178 15:44:51 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:12.178 15:44:51 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:12.178 15:44:51 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:12.178 15:44:51 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:12.178 15:44:51 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:12.178 15:44:51 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:12.178 15:44:51 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:12.178 15:44:51 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:12.178 15:44:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:12.178 15:44:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:12.178 15:44:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:12.178 15:44:51 -- 
scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:12.178 15:44:51 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:12.437 No valid GPT data, bailing 00:02:12.437 15:44:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:12.437 15:44:51 -- scripts/common.sh@391 -- # pt= 00:02:12.437 15:44:51 -- scripts/common.sh@392 -- # return 1 00:02:12.437 15:44:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:12.437 1+0 records in 00:02:12.437 1+0 records out 00:02:12.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00191484 s, 548 MB/s 00:02:12.437 15:44:51 -- spdk/autotest.sh@118 -- # sync 00:02:12.437 15:44:51 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:12.437 15:44:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:12.437 15:44:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:17.718 15:44:57 -- spdk/autotest.sh@124 -- # uname -s 00:02:17.718 15:44:57 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:17.718 15:44:57 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:17.718 15:44:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:17.718 15:44:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:17.718 15:44:57 -- common/autotest_common.sh@10 -- # set +x 00:02:17.718 ************************************ 00:02:17.718 START TEST setup.sh 00:02:17.718 ************************************ 00:02:17.718 15:44:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:17.718 * Looking for test storage... 00:02:17.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:17.718 15:44:57 -- setup/test-setup.sh@10 -- # uname -s 00:02:17.718 15:44:57 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:17.718 15:44:57 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:17.718 15:44:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:17.718 15:44:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:17.718 15:44:57 -- common/autotest_common.sh@10 -- # set +x 00:02:17.718 ************************************ 00:02:17.718 START TEST acl 00:02:17.718 ************************************ 00:02:17.718 15:44:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:17.977 * Looking for test storage... 
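As an aside for readers reconstructing the wipe step just traced (spdk-gpt.py found no valid GPT data, so the first MiB of /dev/nvme0n1 was zeroed), the logic reduces to roughly the sketch below. It is a simplified stand-in, not autotest.sh itself: the blkid probe replaces the spdk-gpt.py/block_in_use helpers, and only the extglob device pattern is taken from the trace.

    #!/usr/bin/env bash
    # Sketch of the wipe above: bare NVMe namespaces with no recognizable partition
    # table get their first MiB zeroed so later setup/filesystem tests start clean.
    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do            # whole namespaces, no partitions
        [[ -b $dev ]] || continue
        pt=$(blkid -s PTTYPE -o value "$dev")   # empty output: no partition table
        if [[ -z $pt ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done
    sync

On this run only /dev/nvme0n1 qualified, which is why a single 1 MiB dd shows up above.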
00:02:17.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:17.977 15:44:57 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:17.977 15:44:57 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:17.977 15:44:57 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:17.977 15:44:57 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:17.977 15:44:57 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:17.977 15:44:57 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:17.977 15:44:57 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:17.977 15:44:57 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:17.977 15:44:57 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:17.977 15:44:57 -- setup/acl.sh@12 -- # devs=() 00:02:17.977 15:44:57 -- setup/acl.sh@12 -- # declare -a devs 00:02:17.977 15:44:57 -- setup/acl.sh@13 -- # drivers=() 00:02:17.977 15:44:57 -- setup/acl.sh@13 -- # declare -A drivers 00:02:17.977 15:44:57 -- setup/acl.sh@51 -- # setup reset 00:02:17.977 15:44:57 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:17.977 15:44:57 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:21.300 15:45:00 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:21.300 15:45:00 -- setup/acl.sh@16 -- # local dev driver 00:02:21.300 15:45:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:21.300 15:45:00 -- setup/acl.sh@15 -- # setup output status 00:02:21.300 15:45:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:21.300 15:45:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:23.206 Hugepages 00:02:23.206 node hugesize free / total 00:02:23.206 15:45:02 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:23.206 15:45:02 -- setup/acl.sh@19 -- # continue 00:02:23.206 15:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.206 15:45:02 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:23.206 15:45:02 -- setup/acl.sh@19 -- # continue 00:02:23.206 15:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.206 15:45:02 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:23.206 15:45:02 -- setup/acl.sh@19 -- # continue 00:02:23.206 15:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.206 00:02:23.206 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:23.206 15:45:02 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:23.206 15:45:02 -- setup/acl.sh@19 -- # continue 00:02:23.206 15:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.206 15:45:02 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:23.206 15:45:02 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.206 15:45:02 -- setup/acl.sh@20 -- # continue 00:02:23.206 15:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.206 15:45:02 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:23.206 15:45:02 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.206 15:45:02 -- setup/acl.sh@20 -- # continue 00:02:23.206 15:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.206 15:45:02 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:23.206 15:45:02 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.206 15:45:02 -- setup/acl.sh@20 -- # continue 00:02:23.206 15:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
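The device walk that continues below (the repeated "read -r _ dev _ _ _ driver _" records) is acl.sh parsing the table printed by setup.sh status: column 2 is the PCI BDF and column 6 the bound driver. A compact sketch of that parsing follows, using the workspace path from this log; the PCI_BLOCKED filter applied at setup/acl.sh@21 is left out for brevity.

    # Collect NVMe controllers from the `setup.sh status` table; hugepage and header
    # rows carry no PCI address and are skipped, and non-nvme drivers (e.g. ioatdma)
    # fall through the second check, matching the long run of `continue` steps below.
    spdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    declare -a devs=()
    declare -A drivers=()
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue      # not a BDF such as 0000:5e:00.0
        [[ $driver == nvme ]] || continue      # keep NVMe controllers only
        devs+=("$dev")                         # (acl.sh also honours PCI_BLOCKED here)
        drivers["$dev"]=$driver
    done < <("$spdk_dir/scripts/setup.sh" status)
    printf 'collected %d NVMe controller(s): %s\n' "${#devs[@]}" "${devs[*]}"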
00:02:23.206 15:45:02 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:23.206 15:45:02 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.206 15:45:02 -- setup/acl.sh@20 -- # continue 00:02:23.206 15:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.206 15:45:02 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:23.206 15:45:02 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.206 15:45:02 -- setup/acl.sh@20 -- # continue 00:02:23.206 15:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.206 15:45:02 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:23.206 15:45:02 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.206 15:45:02 -- setup/acl.sh@20 -- # continue 00:02:23.206 15:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.206 15:45:02 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:23.206 15:45:02 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.206 15:45:02 -- setup/acl.sh@20 -- # continue 00:02:23.206 15:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.206 15:45:02 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:23.206 15:45:02 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.206 15:45:02 -- setup/acl.sh@20 -- # continue 00:02:23.206 15:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.465 15:45:02 -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:02:23.465 15:45:02 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:23.465 15:45:02 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:02:23.465 15:45:02 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:23.465 15:45:02 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:23.465 15:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.465 15:45:02 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:23.465 15:45:02 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.465 15:45:02 -- setup/acl.sh@20 -- # continue 00:02:23.465 15:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.465 15:45:02 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:23.465 15:45:02 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.465 15:45:02 -- setup/acl.sh@20 -- # continue 00:02:23.465 15:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.465 15:45:02 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:23.465 15:45:02 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.465 15:45:02 -- setup/acl.sh@20 -- # continue 00:02:23.465 15:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.465 15:45:02 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:23.465 15:45:02 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.465 15:45:02 -- setup/acl.sh@20 -- # continue 00:02:23.465 15:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.465 15:45:02 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:23.465 15:45:02 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.465 15:45:02 -- setup/acl.sh@20 -- # continue 00:02:23.465 15:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.465 15:45:02 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:23.465 15:45:02 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.465 15:45:02 -- setup/acl.sh@20 -- # continue 00:02:23.465 15:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.465 15:45:02 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:23.465 15:45:02 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.465 15:45:02 -- setup/acl.sh@20 -- # continue 00:02:23.465 15:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.465 15:45:02 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:23.465 15:45:02 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:23.465 15:45:02 -- setup/acl.sh@20 -- # continue 00:02:23.465 15:45:02 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:23.465 15:45:02 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:23.465 15:45:02 -- setup/acl.sh@54 -- # run_test denied denied 00:02:23.465 15:45:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:23.465 15:45:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:23.465 15:45:02 -- common/autotest_common.sh@10 -- # set +x 00:02:23.465 ************************************ 00:02:23.465 START TEST denied 00:02:23.465 ************************************ 00:02:23.465 15:45:03 -- common/autotest_common.sh@1111 -- # denied 00:02:23.465 15:45:03 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:02:23.465 15:45:03 -- setup/acl.sh@38 -- # setup output config 00:02:23.465 15:45:03 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:02:23.465 15:45:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:23.465 15:45:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:26.752 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:02:26.752 15:45:06 -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:02:26.752 15:45:06 -- setup/acl.sh@28 -- # local dev driver 00:02:26.752 15:45:06 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:26.752 15:45:06 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:02:26.752 15:45:06 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:02:26.752 15:45:06 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:26.752 15:45:06 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:26.752 15:45:06 -- setup/acl.sh@41 -- # setup reset 00:02:26.752 15:45:06 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:26.752 15:45:06 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:30.949 00:02:30.949 real 0m6.845s 00:02:30.949 user 0m2.254s 00:02:30.949 sys 0m3.927s 00:02:30.949 15:45:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:30.949 15:45:09 -- common/autotest_common.sh@10 -- # set +x 00:02:30.949 ************************************ 00:02:30.949 END TEST denied 00:02:30.949 ************************************ 00:02:30.949 15:45:10 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:30.949 15:45:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:30.949 15:45:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:30.949 15:45:10 -- common/autotest_common.sh@10 -- # set +x 00:02:30.949 ************************************ 00:02:30.949 START TEST allowed 00:02:30.949 ************************************ 00:02:30.949 15:45:10 -- common/autotest_common.sh@1111 -- # allowed 00:02:30.949 15:45:10 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:02:30.949 15:45:10 -- setup/acl.sh@45 -- # setup output config 00:02:30.949 15:45:10 -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:02:30.949 15:45:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:30.949 15:45:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 
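The allowed half of the check picks up again below; for contrast, the denied half just traced condenses to roughly the following. It reuses the workspace path and BDF from this log, assumes root on the same node, and is only a sketch of the traced steps, not the acl.sh functions themselves.

    # Denied case: with the controller listed in PCI_BLOCKED, `setup.sh config` must
    # report that it skipped the device, and the device must still be bound to the
    # kernel nvme driver afterwards, which is what the readlink/verify steps check.
    spdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bdf=0000:5e:00.0
    PCI_BLOCKED=" $bdf" "$spdk_dir/scripts/setup.sh" config \
        | grep "Skipping denied controller at $bdf"
    readlink -f "/sys/bus/pci/devices/$bdf/driver" | grep -q '/nvme$' \
        && echo "$bdf is still bound to nvme, as expected"
    "$spdk_dir/scripts/setup.sh" reset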
00:02:35.152 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:35.152 15:45:13 -- setup/acl.sh@47 -- # verify 00:02:35.152 15:45:13 -- setup/acl.sh@28 -- # local dev driver 00:02:35.152 15:45:13 -- setup/acl.sh@48 -- # setup reset 00:02:35.152 15:45:13 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:35.152 15:45:13 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:37.084 00:02:37.084 real 0m6.448s 00:02:37.084 user 0m1.956s 00:02:37.084 sys 0m3.586s 00:02:37.084 15:45:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:37.084 15:45:16 -- common/autotest_common.sh@10 -- # set +x 00:02:37.084 ************************************ 00:02:37.084 END TEST allowed 00:02:37.084 ************************************ 00:02:37.084 00:02:37.084 real 0m19.239s 00:02:37.084 user 0m6.338s 00:02:37.084 sys 0m11.381s 00:02:37.084 15:45:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:37.084 15:45:16 -- common/autotest_common.sh@10 -- # set +x 00:02:37.084 ************************************ 00:02:37.084 END TEST acl 00:02:37.084 ************************************ 00:02:37.084 15:45:16 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:37.084 15:45:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:37.084 15:45:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:37.084 15:45:16 -- common/autotest_common.sh@10 -- # set +x 00:02:37.345 ************************************ 00:02:37.345 START TEST hugepages 00:02:37.345 ************************************ 00:02:37.345 15:45:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:37.345 * Looking for test storage... 
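The hugepages test starting here spends most of its trace inside setup/common.sh's get_meminfo, which walks /proc/meminfo (or a per-node meminfo file) field by field until it reaches the requested key. The same lookup can be expressed as a short awk filter; the function below is a local stand-in for illustration, not the traced helper.

    # Read one field from /proc/meminfo, or from a node's meminfo file when a node
    # number is given; get_meminfo in the trace below does the same walk in pure bash.
    get_meminfo_field() {
        local field=$1 node=${2:-}
        local src=/proc/meminfo
        [[ -n $node ]] && src=/sys/devices/system/node/node$node/meminfo
        # Drop the optional "Node N " prefix, then print the value of "<field>:"
        awk -v f="${field}:" '{ sub(/^Node [0-9]+ /, "") } $1 == f { print $2 }' "$src"
    }
    get_meminfo_field Hugepagesize    # prints 2048 (kB) on the node traced below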
00:02:37.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:37.345 15:45:16 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:37.345 15:45:16 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:37.345 15:45:16 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:37.345 15:45:16 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:37.346 15:45:16 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:37.346 15:45:16 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:37.346 15:45:16 -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:37.346 15:45:16 -- setup/common.sh@18 -- # local node= 00:02:37.346 15:45:16 -- setup/common.sh@19 -- # local var val 00:02:37.346 15:45:16 -- setup/common.sh@20 -- # local mem_f mem 00:02:37.346 15:45:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.346 15:45:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.346 15:45:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.346 15:45:16 -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.346 15:45:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.346 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.346 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.346 15:45:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 168592672 kB' 'MemAvailable: 172416252 kB' 'Buffers: 3888 kB' 'Cached: 14430592 kB' 'SwapCached: 0 kB' 'Active: 11401604 kB' 'Inactive: 3663216 kB' 'Active(anon): 10344184 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633620 kB' 'Mapped: 246020 kB' 'Shmem: 9713844 kB' 'KReclaimable: 494316 kB' 'Slab: 1130468 kB' 'SReclaimable: 494316 kB' 'SUnreclaim: 636152 kB' 'KernelStack: 20800 kB' 'PageTables: 10488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982040 kB' 'Committed_AS: 11861260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316196 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:37.346 15:45:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.346 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.346 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.346 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.346 15:45:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.346 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.346 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.346 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.346 15:45:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.346 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.346 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.346 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.346 15:45:16 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.346 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.346 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.346 15:45:16 -- setup/common.sh@31 -- # read -r var val _
00:02:37.346 (the same four-step trace of "[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]", "continue", "IFS=': '", and "read -r var val _" repeats at setup/common.sh@31-32 for every /proc/meminfo field from Cached through PageTables)
00:02:37.346 15:45:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.346 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.346 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.346 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.346 15:45:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.346 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.346 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.346 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.346 15:45:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.346 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.346 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.346 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.346 15:45:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.346 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.346 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.346 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.346 15:45:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.346 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.346 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 
00:02:37.347 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # continue 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.347 15:45:16 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.347 15:45:16 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.347 15:45:16 -- setup/common.sh@33 -- # echo 2048 00:02:37.347 15:45:16 -- setup/common.sh@33 -- # return 0 00:02:37.347 15:45:16 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:37.347 15:45:16 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:37.347 15:45:16 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:37.347 15:45:16 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:37.347 15:45:16 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:37.347 15:45:16 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:37.347 15:45:16 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:37.347 15:45:16 -- setup/hugepages.sh@207 -- # get_nodes 00:02:37.347 15:45:16 -- setup/hugepages.sh@27 -- # local node 00:02:37.347 15:45:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:37.347 15:45:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:37.347 15:45:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:37.347 15:45:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:37.347 15:45:16 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:37.347 15:45:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:37.347 15:45:16 -- setup/hugepages.sh@208 -- # clear_hp 00:02:37.347 15:45:16 -- setup/hugepages.sh@37 -- # local node hp 00:02:37.347 15:45:16 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:37.347 15:45:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:37.347 15:45:16 -- setup/hugepages.sh@41 -- # echo 0 00:02:37.347 15:45:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:37.347 15:45:16 -- setup/hugepages.sh@41 -- # echo 0 00:02:37.347 15:45:16 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:37.347 15:45:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:37.347 15:45:16 -- setup/hugepages.sh@41 -- # echo 0 00:02:37.347 15:45:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:37.347 15:45:16 -- setup/hugepages.sh@41 -- # echo 0 00:02:37.347 15:45:16 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:37.347 15:45:16 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:37.347 15:45:16 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:37.347 15:45:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:37.347 15:45:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:37.347 15:45:16 -- common/autotest_common.sh@10 -- # set +x 00:02:37.607 ************************************ 00:02:37.607 START TEST default_setup 00:02:37.607 ************************************ 00:02:37.607 15:45:17 -- common/autotest_common.sh@1111 -- # default_setup 00:02:37.607 15:45:17 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:37.607 15:45:17 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:37.607 15:45:17 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:37.607 15:45:17 -- setup/hugepages.sh@51 -- # shift 00:02:37.607 15:45:17 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:37.607 15:45:17 -- setup/hugepages.sh@52 -- # local node_ids 00:02:37.607 15:45:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:37.607 15:45:17 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:37.607 15:45:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:37.607 15:45:17 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:37.607 15:45:17 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:37.607 15:45:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:37.607 15:45:17 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:37.607 15:45:17 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:37.607 15:45:17 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:37.607 15:45:17 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
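What the trace above is doing: setup/common.sh's get_meminfo walks /proc/meminfo with IFS=': ' and read -r var val _, hits continue for every field that is not the one requested (Hugepagesize here), then echoes the matching value, 2048. hugepages.sh records that as default_hugepages, points default_huge_nr and global_huge_nr at the matching sysfs/procfs knobs, notes how many hugepages each of the two NUMA nodes already holds (2048 and 0), and clear_hp writes 0 into every per-node, per-size nr_hugepages before the default_setup test starts. A minimal standalone sketch of that lookup-and-clear pattern, using a helper name of my own (get_meminfo_value) rather than the SPDK functions themselves:

  # Sketch only: mirrors the pattern in the trace, not the SPDK helpers verbatim.
  get_meminfo_value() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
      [[ $var == "$key" ]] && { echo "$val"; return 0; }   # prints the value in kB (or pages for HugePages_*)
    done < /proc/meminfo
    return 1
  }

  default_kb=$(get_meminfo_value Hugepagesize)             # 2048 on this machine
  for hp in /sys/devices/system/node/node[0-9]*/hugepages/hugepages-*/nr_hugepages; do
    echo 0 > "$hp"                                         # what clear_hp does; needs root
  done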
00:02:37.607 15:45:17 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:37.608 15:45:17 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:37.608 15:45:17 -- setup/hugepages.sh@73 -- # return 0 00:02:37.608 15:45:17 -- setup/hugepages.sh@137 -- # setup output 00:02:37.608 15:45:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:37.608 15:45:17 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:40.165 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:40.165 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:40.165 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:40.165 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:40.165 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:40.165 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:40.165 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:40.165 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:40.165 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:40.165 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:40.165 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:40.165 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:40.165 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:40.425 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:40.425 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:40.425 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:40.993 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:41.257 15:45:20 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:41.257 15:45:20 -- setup/hugepages.sh@89 -- # local node 00:02:41.257 15:45:20 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:41.257 15:45:20 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:41.257 15:45:20 -- setup/hugepages.sh@92 -- # local surp 00:02:41.257 15:45:20 -- setup/hugepages.sh@93 -- # local resv 00:02:41.257 15:45:20 -- setup/hugepages.sh@94 -- # local anon 00:02:41.257 15:45:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:41.257 15:45:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:41.257 15:45:20 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:41.257 15:45:20 -- setup/common.sh@18 -- # local node= 00:02:41.257 15:45:20 -- setup/common.sh@19 -- # local var val 00:02:41.257 15:45:20 -- setup/common.sh@20 -- # local mem_f mem 00:02:41.257 15:45:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.257 15:45:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.257 15:45:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.257 15:45:20 -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.257 15:45:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.257 15:45:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170728176 kB' 'MemAvailable: 174551676 kB' 'Buffers: 3888 kB' 'Cached: 14430700 kB' 'SwapCached: 0 kB' 'Active: 11415072 kB' 'Inactive: 3663216 kB' 'Active(anon): 10357652 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646780 kB' 'Mapped: 246128 kB' 'Shmem: 9713952 kB' 'KReclaimable: 494156 kB' 'Slab: 1128224 kB' 'SReclaimable: 494156 kB' 'SUnreclaim: 634068 kB' 'KernelStack: 
20576 kB' 'PageTables: 10256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11876884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315988 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
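The snapshot just printed already reflects what default_setup asked for: get_test_nr_hugepages was called with 2097152 (kB) for node 0, and at the 2048 kB default page size that works out to 2097152 / 2048 = 1024 pages, matching nr_hugepages=1024 in the trace and the HugePages_Total: 1024, Hugepagesize: 2048 kB, Hugetlb: 2097152 kB entries above (scripts/setup.sh also rebound the ioatdma and nvme devices to vfio-pci on the way). The arithmetic as a sketch; the real allocation is left to scripts/setup.sh, and the sysfs path shown is simply the same per-node knob clear_hp zeroed earlier:

  size_kb=2097152
  page_kb=2048
  nr=$(( size_kb / page_kb ))      # 1024
  echo "$nr" > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages   # illustration only; needs root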
00:02:41.257 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.257 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.257 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- 
# [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.258 15:45:20 -- setup/common.sh@33 -- # echo 0 00:02:41.258 15:45:20 -- setup/common.sh@33 -- # return 0 00:02:41.258 15:45:20 -- setup/hugepages.sh@97 -- # anon=0 00:02:41.258 15:45:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:41.258 15:45:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.258 15:45:20 -- setup/common.sh@18 -- # local node= 00:02:41.258 15:45:20 -- setup/common.sh@19 -- # local var val 00:02:41.258 15:45:20 -- setup/common.sh@20 -- # local mem_f mem 00:02:41.258 15:45:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.258 15:45:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.258 15:45:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.258 15:45:20 -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.258 15:45:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.258 15:45:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170730996 kB' 'MemAvailable: 174554480 kB' 'Buffers: 3888 kB' 'Cached: 14430704 kB' 'SwapCached: 0 kB' 'Active: 11415052 kB' 'Inactive: 3663216 kB' 'Active(anon): 10357632 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647220 kB' 'Mapped: 246060 kB' 'Shmem: 9713956 kB' 'KReclaimable: 494124 kB' 'Slab: 1128184 kB' 'SReclaimable: 494124 kB' 'SUnreclaim: 634060 kB' 'KernelStack: 20608 kB' 'PageTables: 10324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11876896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315988 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 
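verify_nr_hugepages gathers its counters with that same scan: AnonHugePages (read because transparent_hugepage is not pinned to [never], per the hugepages.sh@96 check above) came back 0, and the passes that follow pull HugePages_Surp and HugePages_Rsvd out of the snapshots the same way before anything is compared. Reusing the get_meminfo_value sketch from above, with illustrative variable names:

  anon=$(get_meminfo_value AnonHugePages)    # transparent hugepages in use, kB; 0 in this run
  surp=$(get_meminfo_value HugePages_Surp)   # surplus pages beyond the configured pool; 0 in this run
  resv=$(get_meminfo_value HugePages_Rsvd)   # pages reserved but not yet faulted in; 0 in this run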
00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.258 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.258 15:45:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.258 
15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': 
' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.259 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.259 15:45:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.259 15:45:20 -- setup/common.sh@33 -- # echo 0 00:02:41.259 15:45:20 -- setup/common.sh@33 -- # return 0 00:02:41.259 15:45:20 -- setup/hugepages.sh@99 -- # surp=0 00:02:41.259 15:45:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:41.259 15:45:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:41.259 15:45:20 -- setup/common.sh@18 -- # local node= 00:02:41.259 15:45:20 -- setup/common.sh@19 -- # local var val 00:02:41.259 15:45:20 -- setup/common.sh@20 -- # local mem_f mem 00:02:41.259 15:45:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.259 15:45:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.259 15:45:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.259 15:45:20 -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.260 15:45:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170730396 kB' 'MemAvailable: 174553880 kB' 'Buffers: 3888 kB' 'Cached: 14430716 kB' 'SwapCached: 0 kB' 'Active: 11414772 kB' 'Inactive: 3663216 kB' 'Active(anon): 10357352 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646920 kB' 'Mapped: 246060 kB' 'Shmem: 9713968 kB' 'KReclaimable: 494124 kB' 'Slab: 1128224 kB' 'SReclaimable: 494124 kB' 'SUnreclaim: 634100 kB' 'KernelStack: 20608 kB' 'PageTables: 10348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11876912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315988 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 
00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- 
setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.260 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.260 15:45:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 
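The HugePages_Rsvd pass wraps up just below (resv=0, like surp before it), the totals are echoed (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), and the trace then runs its consistency checks: the pool has to equal nr_hugepages + surp + resv and, with nothing surplus or reserved, plain nr_hugepages as well. Roughly, with illustrative names (the script's own variable layout may differ):

  nr_hugepages=1024                                        # what default_setup configured
  total=$(get_meminfo_value HugePages_Total)               # 1024 in the snapshots above
  (( total == nr_hugepages + surp + resv )) || exit 1      # 1024 == 1024 + 0 + 0
  (( total == nr_hugepages )) || exit 1                    # and nothing surplus or reserved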
00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.261 15:45:20 -- setup/common.sh@33 -- # echo 0 00:02:41.261 15:45:20 -- setup/common.sh@33 -- # return 0 00:02:41.261 15:45:20 -- setup/hugepages.sh@100 -- # resv=0 00:02:41.261 15:45:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:41.261 nr_hugepages=1024 00:02:41.261 15:45:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:41.261 resv_hugepages=0 00:02:41.261 15:45:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:41.261 surplus_hugepages=0 00:02:41.261 15:45:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:41.261 anon_hugepages=0 00:02:41.261 15:45:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:41.261 15:45:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:41.261 15:45:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:41.261 15:45:20 -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:02:41.261 15:45:20 -- setup/common.sh@18 -- # local node= 00:02:41.261 15:45:20 -- setup/common.sh@19 -- # local var val 00:02:41.261 15:45:20 -- setup/common.sh@20 -- # local mem_f mem 00:02:41.261 15:45:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.261 15:45:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.261 15:45:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.261 15:45:20 -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.261 15:45:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170727372 kB' 'MemAvailable: 174550856 kB' 'Buffers: 3888 kB' 'Cached: 14430728 kB' 'SwapCached: 0 kB' 'Active: 11414524 kB' 'Inactive: 3663216 kB' 'Active(anon): 10357104 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646704 kB' 'Mapped: 246060 kB' 'Shmem: 9713980 kB' 'KReclaimable: 494124 kB' 'Slab: 1128220 kB' 'SReclaimable: 494124 kB' 'SUnreclaim: 634096 kB' 'KernelStack: 20608 kB' 'PageTables: 10348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11876928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315988 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.261 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.261 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 
15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 
15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.262 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.262 15:45:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.262 15:45:20 -- setup/common.sh@33 -- # echo 1024 00:02:41.262 15:45:20 -- setup/common.sh@33 -- # return 0 00:02:41.262 15:45:20 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:41.262 15:45:20 -- setup/hugepages.sh@112 -- # get_nodes 00:02:41.262 15:45:20 -- setup/hugepages.sh@27 -- # local node 00:02:41.263 15:45:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:41.263 15:45:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:41.263 15:45:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:41.263 15:45:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:41.263 15:45:20 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:41.263 15:45:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:41.263 15:45:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:41.263 15:45:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:41.263 15:45:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:41.263 15:45:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.263 15:45:20 -- setup/common.sh@18 -- # local node=0 00:02:41.263 15:45:20 -- setup/common.sh@19 -- # local var val 00:02:41.263 15:45:20 -- setup/common.sh@20 -- # local mem_f mem 00:02:41.263 15:45:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.263 15:45:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:41.263 15:45:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:41.263 15:45:20 -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.263 15:45:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 90899384 
kB' 'MemUsed: 6716244 kB' 'SwapCached: 0 kB' 'Active: 3202540 kB' 'Inactive: 133364 kB' 'Active(anon): 2759948 kB' 'Inactive(anon): 0 kB' 'Active(file): 442592 kB' 'Inactive(file): 133364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2884208 kB' 'Mapped: 101000 kB' 'AnonPages: 454928 kB' 'Shmem: 2308252 kB' 'KernelStack: 11368 kB' 'PageTables: 5792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 268388 kB' 'Slab: 572816 kB' 'SReclaimable: 268388 kB' 'SUnreclaim: 304428 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- 
# read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 
-- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.263 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.263 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 15:45:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.264 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.264 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 15:45:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.264 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.264 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 15:45:20 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.264 15:45:20 -- setup/common.sh@32 -- # continue 00:02:41.264 15:45:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.264 15:45:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.264 15:45:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.264 15:45:20 -- setup/common.sh@33 -- # echo 0 00:02:41.264 15:45:20 -- setup/common.sh@33 -- # return 0 00:02:41.264 15:45:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:41.264 15:45:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:41.264 15:45:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:41.264 15:45:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:41.264 15:45:20 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:41.264 node0=1024 expecting 1024 00:02:41.264 15:45:20 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:41.264 00:02:41.264 real 0m3.880s 00:02:41.264 user 0m1.261s 00:02:41.264 sys 0m1.903s 00:02:41.264 15:45:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:41.264 15:45:20 -- common/autotest_common.sh@10 -- # set +x 00:02:41.264 ************************************ 00:02:41.264 END TEST default_setup 00:02:41.264 ************************************ 00:02:41.523 15:45:20 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:41.524 15:45:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:41.524 15:45:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:41.524 15:45:20 -- common/autotest_common.sh@10 -- # set +x 00:02:41.524 ************************************ 00:02:41.524 START TEST per_node_1G_alloc 00:02:41.524 ************************************ 00:02:41.524 15:45:21 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:02:41.524 15:45:21 -- setup/hugepages.sh@143 -- # local IFS=, 00:02:41.524 15:45:21 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:41.524 15:45:21 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:41.524 15:45:21 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:41.524 15:45:21 -- setup/hugepages.sh@51 -- # shift 00:02:41.524 15:45:21 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:41.524 15:45:21 -- setup/hugepages.sh@52 -- # local node_ids 00:02:41.524 15:45:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:41.524 15:45:21 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:41.524 15:45:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:41.524 15:45:21 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:41.524 15:45:21 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:41.524 15:45:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:41.524 15:45:21 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:41.524 15:45:21 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:41.524 15:45:21 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:41.524 15:45:21 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:41.524 15:45:21 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:41.524 15:45:21 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:41.524 15:45:21 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:41.524 15:45:21 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:41.524 15:45:21 -- setup/hugepages.sh@73 -- # return 0 00:02:41.524 15:45:21 -- setup/hugepages.sh@146 -- # 
NRHUGE=512 00:02:41.524 15:45:21 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:41.524 15:45:21 -- setup/hugepages.sh@146 -- # setup output 00:02:41.524 15:45:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:41.524 15:45:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:44.061 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:44.061 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:44.061 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:44.061 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:44.061 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:44.061 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:44.061 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:44.061 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:44.061 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:44.061 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:44.061 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:44.061 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:44.061 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:44.061 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:44.061 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:44.061 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:44.061 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:44.061 15:45:23 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:44.061 15:45:23 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:44.061 15:45:23 -- setup/hugepages.sh@89 -- # local node 00:02:44.061 15:45:23 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:44.061 15:45:23 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:44.061 15:45:23 -- setup/hugepages.sh@92 -- # local surp 00:02:44.061 15:45:23 -- setup/hugepages.sh@93 -- # local resv 00:02:44.061 15:45:23 -- setup/hugepages.sh@94 -- # local anon 00:02:44.061 15:45:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:44.061 15:45:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:44.061 15:45:23 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:44.061 15:45:23 -- setup/common.sh@18 -- # local node= 00:02:44.061 15:45:23 -- setup/common.sh@19 -- # local var val 00:02:44.061 15:45:23 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.061 15:45:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.061 15:45:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.061 15:45:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.061 15:45:23 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.061 15:45:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.061 15:45:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170707364 kB' 'MemAvailable: 174530848 kB' 'Buffers: 3888 kB' 'Cached: 14430820 kB' 'SwapCached: 0 kB' 'Active: 11415920 kB' 'Inactive: 3663216 kB' 'Active(anon): 10358500 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 
647784 kB' 'Mapped: 246620 kB' 'Shmem: 9714072 kB' 'KReclaimable: 494124 kB' 'Slab: 1128308 kB' 'SReclaimable: 494124 kB' 'SUnreclaim: 634184 kB' 'KernelStack: 20624 kB' 'PageTables: 10404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11879776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316068 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.061 
15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.061 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.061 15:45:23 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.062 
15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.062 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.062 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.326 15:45:23 -- setup/common.sh@33 -- # echo 0 00:02:44.326 15:45:23 -- setup/common.sh@33 -- # return 0 00:02:44.326 15:45:23 -- setup/hugepages.sh@97 -- # anon=0 00:02:44.326 15:45:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:44.326 15:45:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:44.326 15:45:23 -- setup/common.sh@18 -- # local node= 00:02:44.326 15:45:23 -- setup/common.sh@19 -- # local var val 00:02:44.326 15:45:23 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.326 15:45:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.326 15:45:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.326 15:45:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.326 15:45:23 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.326 15:45:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.326 15:45:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170704864 kB' 'MemAvailable: 174528348 kB' 'Buffers: 3888 kB' 'Cached: 14430820 kB' 'SwapCached: 0 kB' 'Active: 11420028 kB' 'Inactive: 3663216 kB' 'Active(anon): 10362608 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 651924 kB' 'Mapped: 246592 kB' 'Shmem: 9714072 kB' 'KReclaimable: 494124 kB' 'Slab: 1128308 kB' 'SReclaimable: 494124 kB' 'SUnreclaim: 634184 kB' 'KernelStack: 20688 kB' 'PageTables: 10224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11885096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316100 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.326 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.326 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 
15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 
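(Editor's note on the trace above: the get_meminfo / verify_nr_hugepages steps being traced here boil down to two small operations: read one field out of /proc/meminfo, or out of /sys/devices/system/node/nodeN/meminfo when a node id is given, then check that the kernel-reported hugepage total matches what the test configured once surplus and reserved pages are added in. Below is a minimal standalone sketch of that idea; the function and variable names are illustrative only, not the actual setup/common.sh implementation, and it assumes a Linux host with hugepages already configured as in this run.)

  #!/usr/bin/env bash
  # Sketch only: read a single meminfo field, system-wide or per NUMA node.
  get_meminfo() {                     # usage: get_meminfo <Field> [<node>]
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # Per-node files prefix every line with "Node <n> ", so strip that first.
      sed -e 's/^Node [0-9]* //' "$mem_f" | awk -v k="$get" '$1 == (k ":") {print $2}'
  }

  expected=1024                                  # what the test requested (nr_hugepages)
  total=$(get_meminfo HugePages_Total)
  rsvd=$(get_meminfo HugePages_Rsvd)
  surp=$(get_meminfo HugePages_Surp)
  node0_surp=$(get_meminfo HugePages_Surp 0)     # per-node query, as in the node0 check
  # Same accounting idea as the verify step in the trace: the reported total must
  # cover the requested pages plus any surplus and reserved pages.
  (( total == expected + surp + rsvd )) && \
      echo "hugepage accounting OK: $total pages, node0 surplus=$node0_surp"

(The traced setup/common.sh does the same thing without external tools: it mapfile's the whole meminfo file into an array, strips the "Node <n>" prefix, and walks every "key: value" pair with read -r var val _, which is why the log shows one continue per non-matching key before the matching field's value is echoed back.)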
00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 
-- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.327 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.327 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 
-- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.328 15:45:23 -- setup/common.sh@33 -- # echo 0 00:02:44.328 15:45:23 -- setup/common.sh@33 -- # return 0 00:02:44.328 15:45:23 -- setup/hugepages.sh@99 -- # surp=0 00:02:44.328 15:45:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:44.328 15:45:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:44.328 15:45:23 -- setup/common.sh@18 -- # local node= 00:02:44.328 15:45:23 -- setup/common.sh@19 -- # local var val 00:02:44.328 15:45:23 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.328 15:45:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.328 15:45:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.328 15:45:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.328 15:45:23 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.328 15:45:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170708408 kB' 'MemAvailable: 174531892 kB' 'Buffers: 3888 kB' 'Cached: 14430832 kB' 'SwapCached: 0 kB' 'Active: 11415808 kB' 'Inactive: 3663216 kB' 'Active(anon): 10358388 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647756 kB' 'Mapped: 246436 kB' 'Shmem: 9714084 kB' 'KReclaimable: 494124 kB' 'Slab: 1128308 kB' 'SReclaimable: 494124 kB' 'SUnreclaim: 634184 kB' 'KernelStack: 20672 kB' 'PageTables: 10692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11878816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316132 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 
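The long printf block above is the /proc/meminfo snapshot that mapfile captured before the scan, and its hugepage fields are internally consistent: HugePages_Total: 1024 at Hugepagesize: 2048 kB gives 1024 * 2048 = 2097152 kB, matching the reported Hugetlb: 2097152 kB. The same check can be reproduced on any host with a plain awk one-liner (not part of the SPDK scripts):

    # Check Hugetlb == HugePages_Total * Hugepagesize (all values in kB).
    awk '/^HugePages_Total:/ {n = $2}
         /^Hugepagesize:/    {sz = $2}
         /^Hugetlb:/         {ht = $2}
         END {printf "%d * %d kB = %d kB (Hugetlb: %d kB)\n", n, sz, n * sz, ht}' /proc/meminfo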
15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r 
var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.328 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.328 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 
00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.329 15:45:23 -- setup/common.sh@33 -- # echo 0 00:02:44.329 15:45:23 -- setup/common.sh@33 -- # return 0 00:02:44.329 15:45:23 -- setup/hugepages.sh@100 -- # resv=0 00:02:44.329 15:45:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:44.329 nr_hugepages=1024 00:02:44.329 15:45:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:44.329 resv_hugepages=0 00:02:44.329 15:45:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:44.329 surplus_hugepages=0 00:02:44.329 15:45:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:44.329 anon_hugepages=0 00:02:44.329 15:45:23 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:44.329 15:45:23 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
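With surplus and reserved both read back as 0, the harness checks its hugepage accounting: the 1024 pages it configured must equal nr_hugepages plus surplus plus reserved, and HugePages_Total itself (queried next) must also come back as 1024. The relation, spelled out with the values from this run:

    # Hugepage accounting checked by the harness (values from this run in comments).
    nr_hugepages=1024     # pages the test configured
    surp=0                # HugePages_Surp read from /proc/meminfo
    resv=0                # HugePages_Rsvd read from /proc/meminfo
    total=1024            # HugePages_Total read from /proc/meminfo
    (( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting consistent'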
00:02:44.329 15:45:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:44.329 15:45:23 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:44.329 15:45:23 -- setup/common.sh@18 -- # local node= 00:02:44.329 15:45:23 -- setup/common.sh@19 -- # local var val 00:02:44.329 15:45:23 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.329 15:45:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.329 15:45:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.329 15:45:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.329 15:45:23 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.329 15:45:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170708804 kB' 'MemAvailable: 174532288 kB' 'Buffers: 3888 kB' 'Cached: 14430848 kB' 'SwapCached: 0 kB' 'Active: 11415932 kB' 'Inactive: 3663216 kB' 'Active(anon): 10358512 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647752 kB' 'Mapped: 246088 kB' 'Shmem: 9714100 kB' 'KReclaimable: 494124 kB' 'Slab: 1128276 kB' 'SReclaimable: 494124 kB' 'SUnreclaim: 634152 kB' 'KernelStack: 20768 kB' 'PageTables: 10876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11880336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316212 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.329 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.329 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 
-- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 
00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- 
setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.330 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.330 15:45:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.331 15:45:23 -- setup/common.sh@33 -- # echo 1024 00:02:44.331 15:45:23 -- setup/common.sh@33 -- # return 0 00:02:44.331 15:45:23 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:44.331 15:45:23 -- setup/hugepages.sh@112 -- # get_nodes 00:02:44.331 15:45:23 -- setup/hugepages.sh@27 -- # local node 00:02:44.331 15:45:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.331 15:45:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:44.331 15:45:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.331 15:45:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:44.331 15:45:23 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:44.331 15:45:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:44.331 15:45:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:44.331 15:45:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:44.331 15:45:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:44.331 15:45:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:44.331 15:45:23 -- setup/common.sh@18 -- # local node=0 00:02:44.331 15:45:23 -- setup/common.sh@19 -- # local var val 00:02:44.331 15:45:23 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.331 15:45:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.331 15:45:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:44.331 15:45:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:44.331 15:45:23 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.331 15:45:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r 
var val _ 00:02:44.331 15:45:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91942696 kB' 'MemUsed: 5672932 kB' 'SwapCached: 0 kB' 'Active: 3201908 kB' 'Inactive: 133364 kB' 'Active(anon): 2759316 kB' 'Inactive(anon): 0 kB' 'Active(file): 442592 kB' 'Inactive(file): 133364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2884304 kB' 'Mapped: 101004 kB' 'AnonPages: 454220 kB' 'Shmem: 2308348 kB' 'KernelStack: 11368 kB' 'PageTables: 5724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 268388 kB' 'Slab: 572908 kB' 'SReclaimable: 268388 kB' 'SUnreclaim: 304520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # 
continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.331 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.331 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 
15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@33 -- # echo 0 00:02:44.332 15:45:23 -- setup/common.sh@33 -- # return 0 00:02:44.332 15:45:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:44.332 15:45:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:44.332 15:45:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:44.332 15:45:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:44.332 15:45:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:44.332 15:45:23 -- setup/common.sh@18 -- # local node=1 00:02:44.332 15:45:23 -- setup/common.sh@19 -- # local var val 00:02:44.332 15:45:23 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.332 15:45:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.332 15:45:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:44.332 15:45:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:44.332 15:45:23 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.332 15:45:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765552 kB' 'MemFree: 78765168 kB' 'MemUsed: 15000384 kB' 'SwapCached: 0 kB' 'Active: 8214088 kB' 'Inactive: 3529852 kB' 'Active(anon): 7599260 kB' 'Inactive(anon): 0 kB' 'Active(file): 614828 kB' 'Inactive(file): 3529852 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11550448 kB' 'Mapped: 145084 kB' 'AnonPages: 193572 kB' 'Shmem: 7405768 kB' 'KernelStack: 9432 kB' 'PageTables: 4992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 225736 kB' 'Slab: 555368 kB' 'SReclaimable: 225736 kB' 'SUnreclaim: 329632 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 
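After the system-wide numbers check out, get_nodes enumerates /sys/devices/system/node/node[0-9]* (two NUMA nodes here, with 512 pages expected on each), and get_meminfo is re-run with node=0 and node=1 so that mem_f points at the per-node meminfo files instead of /proc/meminfo. The two per-node snapshots above are consistent with the global one: MemTotal is 97615628 kB on node0 and 93765552 kB on node1, which sum to the 191381180 kB reported system-wide, and each node shows HugePages_Total: 512 with HugePages_Surp: 0. A small sketch of the per-node variant, again with an illustrative helper name rather than the real setup/common.sh code:

    # Read one field from a NUMA node's meminfo; with no node given, fall back
    # to /proc/meminfo (same idea as the node=, node=0 and node=1 calls above).
    node_meminfo_value() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")   # per-node lines carry a "Node N" prefix
        return 1
    }
    node_meminfo_value HugePages_Total 0   # 512 on this host
    node_meminfo_value HugePages_Total 1   # 512 on this host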
00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.332 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.332 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.333 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.333 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.333 15:45:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.333 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.333 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.333 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.333 15:45:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.333 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.333 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.333 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.333 15:45:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.333 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.333 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.333 15:45:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:44.333 15:45:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.333 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.333 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.333 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.333 15:45:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.333 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.333 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.333 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.333 15:45:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.333 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.333 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.333 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.333 15:45:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.333 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.333 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.333 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.333 15:45:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.333 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.333 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.333 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.333 15:45:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.333 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.333 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.333 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.333 15:45:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.333 15:45:23 -- setup/common.sh@32 -- # continue 00:02:44.333 15:45:23 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.333 15:45:23 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.333 15:45:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.333 15:45:23 -- setup/common.sh@33 -- # echo 0 00:02:44.333 15:45:23 -- setup/common.sh@33 -- # return 0 00:02:44.333 15:45:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:44.333 15:45:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:44.333 15:45:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:44.333 15:45:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:44.333 15:45:23 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:44.333 node0=512 expecting 512 00:02:44.333 15:45:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:44.333 15:45:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:44.333 15:45:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:44.333 15:45:23 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:44.333 node1=512 expecting 512 00:02:44.333 15:45:23 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:44.333 00:02:44.333 real 0m2.823s 00:02:44.333 user 0m1.148s 00:02:44.333 sys 0m1.734s 00:02:44.333 15:45:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:44.333 15:45:23 -- common/autotest_common.sh@10 -- # set +x 00:02:44.333 ************************************ 00:02:44.333 END TEST per_node_1G_alloc 00:02:44.333 ************************************ 00:02:44.333 15:45:23 -- 
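Annotation. What the trace above is doing: setup/common.sh's get_meminfo walks a node's meminfo file line by line with IFS=': ' and read -r var val _, hitting continue for every key that is not the one requested (HugePages_Surp here), and finally echoing the matching value, 0. per_node_1G_alloc then confirms that both NUMA nodes report the 512 hugepages it asked for ("node0=512 expecting 512", "node1=512 expecting 512") before the next test starts. Below is a minimal sketch of that parsing pattern; the helper name get_meminfo_field and its interface are mine, not the SPDK script's.

  #!/usr/bin/env bash
  # Minimal sketch of the parsing pattern traced above (not the SPDK
  # setup/common.sh itself): pull a single field out of /proc/meminfo,
  # or out of a node's meminfo file when a node id is given.
  get_meminfo_field() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local var val rest
      while IFS=': ' read -r var val rest; do
          if [[ $var == Node ]]; then
              # Per-node files prefix each line with "Node <id>"; re-split the rest.
              IFS=': ' read -r var val rest <<<"$rest"
          fi
          if [[ $var == "$get" ]]; then
              echo "${val:-0}"
              return 0
          fi
      done <"$mem_f"
      echo 0
  }

  get_meminfo_field HugePages_Surp      # system-wide surplus pages (0 in the run above)
  get_meminfo_field HugePages_Total 0   # node0 total (512 expected by per_node_1G_alloc)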
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:44.333 15:45:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:44.333 15:45:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:44.333 15:45:23 -- common/autotest_common.sh@10 -- # set +x 00:02:44.593 ************************************ 00:02:44.593 START TEST even_2G_alloc 00:02:44.593 ************************************ 00:02:44.593 15:45:24 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:02:44.593 15:45:24 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:44.593 15:45:24 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:44.593 15:45:24 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:44.593 15:45:24 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:44.593 15:45:24 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:44.593 15:45:24 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:44.593 15:45:24 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:44.593 15:45:24 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:44.593 15:45:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:44.593 15:45:24 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:44.593 15:45:24 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:44.593 15:45:24 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:44.593 15:45:24 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:44.593 15:45:24 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:44.593 15:45:24 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:44.593 15:45:24 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:44.593 15:45:24 -- setup/hugepages.sh@83 -- # : 512 00:02:44.593 15:45:24 -- setup/hugepages.sh@84 -- # : 1 00:02:44.593 15:45:24 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:44.593 15:45:24 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:44.593 15:45:24 -- setup/hugepages.sh@83 -- # : 0 00:02:44.593 15:45:24 -- setup/hugepages.sh@84 -- # : 0 00:02:44.593 15:45:24 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:44.593 15:45:24 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:44.593 15:45:24 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:44.593 15:45:24 -- setup/hugepages.sh@153 -- # setup output 00:02:44.593 15:45:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:44.593 15:45:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:47.137 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:47.137 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:47.137 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:47.137 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:47.137 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:47.137 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:47.137 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:47.137 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:47.137 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:47.137 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:47.137 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:47.137 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:47.137 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:47.137 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:47.137 0000:80:04.2 (8086 
2021): Already using the vfio-pci driver 00:02:47.137 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:47.137 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:47.137 15:45:26 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:47.137 15:45:26 -- setup/hugepages.sh@89 -- # local node 00:02:47.137 15:45:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:47.137 15:45:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:47.137 15:45:26 -- setup/hugepages.sh@92 -- # local surp 00:02:47.137 15:45:26 -- setup/hugepages.sh@93 -- # local resv 00:02:47.137 15:45:26 -- setup/hugepages.sh@94 -- # local anon 00:02:47.137 15:45:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:47.137 15:45:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:47.137 15:45:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:47.137 15:45:26 -- setup/common.sh@18 -- # local node= 00:02:47.137 15:45:26 -- setup/common.sh@19 -- # local var val 00:02:47.137 15:45:26 -- setup/common.sh@20 -- # local mem_f mem 00:02:47.137 15:45:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.137 15:45:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.137 15:45:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.137 15:45:26 -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.137 15:45:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.137 15:45:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170753112 kB' 'MemAvailable: 174576596 kB' 'Buffers: 3888 kB' 'Cached: 14430928 kB' 'SwapCached: 0 kB' 'Active: 11414460 kB' 'Inactive: 3663216 kB' 'Active(anon): 10357040 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646012 kB' 'Mapped: 245048 kB' 'Shmem: 9714180 kB' 'KReclaimable: 494124 kB' 'Slab: 1128032 kB' 'SReclaimable: 494124 kB' 'SUnreclaim: 633908 kB' 'KernelStack: 20768 kB' 'PageTables: 10512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11865744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316308 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # read -r var val 
_ 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.137 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.137 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 
15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.138 15:45:26 -- 
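Annotation. The even_2G_alloc setup traced earlier requests 2097152 kB of hugepages; at the default 2048 kB page size that is 1024 pages, and with HUGE_EVEN_ALLOC=yes the per-node loop hands 512 pages to each of the two NUMA nodes, which matches the snapshot just printed (HugePages_Total: 1024, Hugepagesize: 2048 kB, Hugetlb: 2097152 kB). A small sketch of that sizing arithmetic, with variable names of my own choosing:

  #!/usr/bin/env bash
  # Sketch of the even-allocation arithmetic, assuming 2048 kB hugepages
  # and two NUMA nodes (values taken from the snapshot above).
  size_kb=2097152                                # requested total, in kB
  hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
  hugepagesize_kb=${hugepagesize_kb:-2048}       # fall back to the common default
  nr_hugepages=$(( size_kb / hugepagesize_kb ))  # 2097152 / 2048 = 1024
  nodes=2
  per_node=$(( nr_hugepages / nodes ))           # 512 pages on node0 and node1
  echo "nr_hugepages=$nr_hugepages per_node=$per_node"

Requesting the total in kB and deriving the page count from Hugepagesize keeps the same 2G target meaningful on systems whose default hugepage size differs.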
setup/common.sh@33 -- # echo 0 00:02:47.138 15:45:26 -- setup/common.sh@33 -- # return 0 00:02:47.138 15:45:26 -- setup/hugepages.sh@97 -- # anon=0 00:02:47.138 15:45:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:47.138 15:45:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:47.138 15:45:26 -- setup/common.sh@18 -- # local node= 00:02:47.138 15:45:26 -- setup/common.sh@19 -- # local var val 00:02:47.138 15:45:26 -- setup/common.sh@20 -- # local mem_f mem 00:02:47.138 15:45:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.138 15:45:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.138 15:45:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.138 15:45:26 -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.138 15:45:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170750968 kB' 'MemAvailable: 174574452 kB' 'Buffers: 3888 kB' 'Cached: 14430932 kB' 'SwapCached: 0 kB' 'Active: 11413924 kB' 'Inactive: 3663216 kB' 'Active(anon): 10356504 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645588 kB' 'Mapped: 244972 kB' 'Shmem: 9714184 kB' 'KReclaimable: 494124 kB' 'Slab: 1128040 kB' 'SReclaimable: 494124 kB' 'SUnreclaim: 633916 kB' 'KernelStack: 20736 kB' 'PageTables: 10420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11865756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316212 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.138 
15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.138 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.138 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 
15:45:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': 
' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.139 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.139 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.140 15:45:26 -- setup/common.sh@33 -- # echo 0 00:02:47.140 15:45:26 -- setup/common.sh@33 -- # return 0 00:02:47.140 15:45:26 -- setup/hugepages.sh@99 -- # surp=0 00:02:47.140 15:45:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:47.140 15:45:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:47.140 15:45:26 -- setup/common.sh@18 -- # local node= 00:02:47.140 15:45:26 -- setup/common.sh@19 -- # local var val 00:02:47.140 15:45:26 -- setup/common.sh@20 -- # local mem_f mem 00:02:47.140 15:45:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.140 15:45:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.140 15:45:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.140 15:45:26 -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.140 15:45:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- 
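Annotation. verify_nr_hugepages scans /proc/meminfo once per counter: the AnonHugePages pass above yielded anon=0, the HugePages_Surp pass yielded surp=0, and the pass starting here reads HugePages_Rsvd; further down the trace the script requires (( 1024 == nr_hugepages + surp + resv )). The stand-alone sketch below expresses that same consistency check with a single awk readout rather than the script's field-by-field loop:

  #!/usr/bin/env bash
  # Sketch of the verify step's consistency check (not the SPDK script):
  # read the hugepage counters straight from /proc/meminfo and compare.
  read -r total free rsvd surp < <(awk '
      /^HugePages_Total:/ {t=$2}
      /^HugePages_Free:/  {f=$2}
      /^HugePages_Rsvd:/  {r=$2}
      /^HugePages_Surp:/  {s=$2}
      END {print t, f, r, s}' /proc/meminfo)

  nr_hugepages=1024   # what the even_2G_alloc test configured above
  if (( total == nr_hugepages + surp + rsvd )); then
      echo "hugepage accounting consistent: total=$total free=$free rsvd=$rsvd surp=$surp"
  else
      echo "mismatch: total=$total expected=$((nr_hugepages + surp + rsvd))" >&2
  fi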
setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170750576 kB' 'MemAvailable: 174574060 kB' 'Buffers: 3888 kB' 'Cached: 14430944 kB' 'SwapCached: 0 kB' 'Active: 11413724 kB' 'Inactive: 3663216 kB' 'Active(anon): 10356304 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645368 kB' 'Mapped: 244972 kB' 'Shmem: 9714196 kB' 'KReclaimable: 494124 kB' 'Slab: 1128040 kB' 'SReclaimable: 494124 kB' 'SUnreclaim: 633916 kB' 'KernelStack: 20688 kB' 'PageTables: 10592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11865772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316228 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 
00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- 
setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.140 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.140 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.141 15:45:26 -- 
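Annotation. The scan in progress here is still looking for HugePages_Rsvd in /proc/meminfo. The per-node side of the allocation this test expects (512 pages of 2048 kB on each of the two nodes) is also exposed directly in sysfs; the short sketch below, which is not part of the SPDK scripts, prints those per-node counters.

  #!/usr/bin/env bash
  # Sketch: show the per-node 2 MB hugepage counts straight from sysfs,
  # which is where an even 512/512 split like the one this test expects
  # would be visible.
  for node in /sys/devices/system/node/node[0-9]*; do
      f=$node/hugepages/hugepages-2048kB/nr_hugepages
      [[ -r $f ]] && echo "$(basename "$node"): $(cat "$f") x 2048kB pages"
  done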
setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.141 15:45:26 -- setup/common.sh@33 -- # echo 0 00:02:47.141 15:45:26 -- setup/common.sh@33 -- # return 0 00:02:47.141 15:45:26 -- setup/hugepages.sh@100 -- # resv=0 00:02:47.141 15:45:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:47.141 nr_hugepages=1024 00:02:47.141 15:45:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:47.141 resv_hugepages=0 00:02:47.141 15:45:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:47.141 surplus_hugepages=0 00:02:47.141 15:45:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:47.141 anon_hugepages=0 00:02:47.141 15:45:26 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:47.141 15:45:26 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:47.141 15:45:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:47.141 15:45:26 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:47.141 15:45:26 -- setup/common.sh@18 -- # local node= 00:02:47.141 15:45:26 -- setup/common.sh@19 -- # local var val 00:02:47.141 15:45:26 -- setup/common.sh@20 -- # local mem_f mem 00:02:47.141 15:45:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.141 15:45:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.141 15:45:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.141 15:45:26 -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.141 15:45:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170750440 kB' 'MemAvailable: 174573924 kB' 'Buffers: 3888 kB' 'Cached: 14430944 kB' 'SwapCached: 0 kB' 'Active: 11414340 kB' 'Inactive: 3663216 kB' 'Active(anon): 10356920 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645984 kB' 'Mapped: 244972 kB' 'Shmem: 9714196 kB' 'KReclaimable: 494124 kB' 'Slab: 
1128040 kB' 'SReclaimable: 494124 kB' 'SUnreclaim: 633916 kB' 'KernelStack: 20784 kB' 'PageTables: 10656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11865784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316356 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.141 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.141 15:45:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.142 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.142 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.142 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.142 15:45:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.142 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.142 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.142 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.142 15:45:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.142 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.142 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.142 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.142 15:45:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.142 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.142 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.142 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.142 15:45:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.142 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.142 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.142 15:45:26 -- setup/common.sh@31 -- # read -r 
var val _ 00:02:47.142 15:45:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.142 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.142 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.142 15:45:26 -- setup/common.sh@31 -- # read -r var val _
...
00:02:47.405 15:45:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.405 15:45:26 -- setup/common.sh@32 -- # continue
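The scan above is setup/common.sh's get_meminfo stepping through every meminfo key with IFS=': ' and read -r until it reaches the one it was asked for, then echoing that value and returning. A self-contained sketch of the same pattern follows; it assumes a bash shell with extglob enabled, and the helper name meminfo_field plus its optional node argument are illustrative only, not the script's real interface.

  #!/usr/bin/env bash
  # Minimal sketch of the per-key meminfo scan traced above (illustrative helper).
  shopt -s extglob
  meminfo_field() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
      local -a mem
      # Per-node lookups read that node's own meminfo, as the node0/node1 queries below do.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")     # per-node files prefix every key with "Node N "
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"                  # value only, e.g. 1024 for HugePages_Total above
              return 0
          fi
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  meminfo_field HugePages_Total     # system-wide hugepage count
  meminfo_field HugePages_Surp 0    # surplus pages on NUMA node 0

The echo/return pair at the end of each lookup is what shows up in the trace as "echo 1024" and "return 0".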
00:02:47.405 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.405 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.405 15:45:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.405 15:45:26 -- setup/common.sh@33 -- # echo 1024 00:02:47.405 15:45:26 -- setup/common.sh@33 -- # return 0 00:02:47.405 15:45:26 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:47.405 15:45:26 -- setup/hugepages.sh@112 -- # get_nodes 00:02:47.405 15:45:26 -- setup/hugepages.sh@27 -- # local node 00:02:47.405 15:45:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:47.405 15:45:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:47.405 15:45:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:47.405 15:45:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:47.405 15:45:26 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:47.405 15:45:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:47.405 15:45:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:47.405 15:45:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:47.405 15:45:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:47.405 15:45:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:47.405 15:45:26 -- setup/common.sh@18 -- # local node=0 00:02:47.405 15:45:26 -- setup/common.sh@19 -- # local var val 00:02:47.405 15:45:26 -- setup/common.sh@20 -- # local mem_f mem 00:02:47.405 15:45:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.405 15:45:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:47.405 15:45:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:47.405 15:45:26 -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.405 15:45:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.405 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.405 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.405 15:45:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91981704 kB' 'MemUsed: 5633924 kB' 'SwapCached: 0 kB' 'Active: 3202516 kB' 'Inactive: 133364 kB' 'Active(anon): 2759924 kB' 'Inactive(anon): 0 kB' 'Active(file): 442592 kB' 'Inactive(file): 133364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2884404 kB' 'Mapped: 99860 kB' 'AnonPages: 454708 kB' 'Shmem: 2308448 kB' 'KernelStack: 11416 kB' 'PageTables: 5864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 268388 kB' 'Slab: 572828 kB' 'SReclaimable: 268388 kB' 'SUnreclaim: 304440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:47.405 15:45:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.405 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.405 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.405 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.405 15:45:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.405 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.405 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.405 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.405 
15:45:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.405 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.405 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.405 15:45:26 -- setup/common.sh@31 -- # read -r var val _
...
00:02:47.406 15:45:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:47.406 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.406 15:45:26 -- setup/common.sh@33 -- # echo 0 00:02:47.406 15:45:26 -- setup/common.sh@33 -- # return 0 00:02:47.406 15:45:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:47.406 15:45:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:47.406 15:45:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:47.406 15:45:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:47.406 15:45:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:47.406 15:45:26 -- setup/common.sh@18 -- # local node=1 00:02:47.406 15:45:26 -- setup/common.sh@19 -- # local var val 00:02:47.406 15:45:26 -- setup/common.sh@20 -- # local mem_f mem 00:02:47.406 15:45:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.406 15:45:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:47.406 15:45:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:47.406 15:45:26 -- 
setup/common.sh@28 -- # mapfile -t mem 00:02:47.406 15:45:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.406 15:45:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765552 kB' 'MemFree: 78771400 kB' 'MemUsed: 14994152 kB' 'SwapCached: 0 kB' 'Active: 8211256 kB' 'Inactive: 3529852 kB' 'Active(anon): 7596428 kB' 'Inactive(anon): 0 kB' 'Active(file): 614828 kB' 'Inactive(file): 3529852 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11550456 kB' 'Mapped: 145112 kB' 'AnonPages: 190736 kB' 'Shmem: 7405776 kB' 'KernelStack: 9176 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 225736 kB' 'Slab: 555436 kB' 'SReclaimable: 225736 kB' 'SUnreclaim: 329700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.406 15:45:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:47.406 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.406 15:45:26 -- setup/common.sh@32 -- # continue
...
00:02:47.407 15:45:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.407 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.407 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.407 15:45:26
-- setup/common.sh@31 -- # read -r var val _ 00:02:47.407 15:45:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.407 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.407 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.407 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.407 15:45:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.407 15:45:26 -- setup/common.sh@32 -- # continue 00:02:47.407 15:45:26 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.407 15:45:26 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.407 15:45:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.407 15:45:26 -- setup/common.sh@33 -- # echo 0 00:02:47.407 15:45:26 -- setup/common.sh@33 -- # return 0 00:02:47.407 15:45:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:47.407 15:45:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:47.407 15:45:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:47.407 15:45:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:47.407 15:45:26 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:47.407 node0=512 expecting 512 00:02:47.407 15:45:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:47.407 15:45:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:47.407 15:45:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:47.407 15:45:26 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:47.407 node1=512 expecting 512 00:02:47.407 15:45:26 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:47.407 00:02:47.407 real 0m2.803s 00:02:47.407 user 0m1.144s 00:02:47.407 sys 0m1.686s 00:02:47.407 15:45:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:47.407 15:45:26 -- common/autotest_common.sh@10 -- # set +x 00:02:47.407 ************************************ 00:02:47.407 END TEST even_2G_alloc 00:02:47.407 ************************************ 00:02:47.407 15:45:26 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:47.407 15:45:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:47.407 15:45:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:47.407 15:45:26 -- common/autotest_common.sh@10 -- # set +x 00:02:47.407 ************************************ 00:02:47.407 START TEST odd_alloc 00:02:47.407 ************************************ 00:02:47.407 15:45:27 -- common/autotest_common.sh@1111 -- # odd_alloc 00:02:47.407 15:45:27 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:47.407 15:45:27 -- setup/hugepages.sh@49 -- # local size=2098176 00:02:47.407 15:45:27 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:47.407 15:45:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:47.407 15:45:27 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:47.407 15:45:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:47.407 15:45:27 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:47.407 15:45:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:47.407 15:45:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:47.407 15:45:27 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:47.407 15:45:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:47.407 15:45:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:47.407 15:45:27 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:47.407 
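The two per-node lookups above end in the expected results, node0=512 expecting 512 and node1=512 expecting 512, which is what lets even_2G_alloc pass. A minimal sketch of that kind of per-node verification is below, assuming 2048 kB hugepages (Hugepagesize: 2048 kB in the dump above) and reading the kernel's standard nr_hugepages counter under /sys/devices/system/node instead of the script's meminfo path; the expected array and the loop are illustrative, not the hugepages.sh implementation.

  #!/usr/bin/env bash
  # Sketch of the per-node check behind the "node0=512 expecting 512" lines above.
  declare -A expected=([0]=512 [1]=512)   # the even split the test set up
  for node in "${!expected[@]}"; do
      sysfs=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
      actual=$(<"$sysfs")
      echo "node$node=$actual expecting ${expected[$node]}"
      [[ $actual -eq ${expected[$node]} ]] || exit 1
  done

Reading the sysfs counter is just one way to get the same number that the trace derives from each node's meminfo.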
15:45:27 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:47.407 15:45:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:47.407 15:45:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:47.407 15:45:27 -- setup/hugepages.sh@83 -- # : 513 00:02:47.407 15:45:27 -- setup/hugepages.sh@84 -- # : 1 00:02:47.407 15:45:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:47.407 15:45:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:47.407 15:45:27 -- setup/hugepages.sh@83 -- # : 0 00:02:47.407 15:45:27 -- setup/hugepages.sh@84 -- # : 0 00:02:47.407 15:45:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:47.407 15:45:27 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:47.407 15:45:27 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:47.407 15:45:27 -- setup/hugepages.sh@160 -- # setup output 00:02:47.407 15:45:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:47.407 15:45:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:49.952 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:49.952 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:49.952 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:49.952 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:49.952 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:49.952 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:49.952 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:49.952 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:49.952 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:49.952 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:49.952 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:49.952 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:49.952 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:49.952 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:49.952 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:49.952 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:49.952 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:49.952 15:45:29 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:49.952 15:45:29 -- setup/hugepages.sh@89 -- # local node 00:02:49.952 15:45:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:49.952 15:45:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:49.952 15:45:29 -- setup/hugepages.sh@92 -- # local surp 00:02:49.952 15:45:29 -- setup/hugepages.sh@93 -- # local resv 00:02:49.952 15:45:29 -- setup/hugepages.sh@94 -- # local anon 00:02:49.952 15:45:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:49.952 15:45:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:49.952 15:45:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:49.952 15:45:29 -- setup/common.sh@18 -- # local node= 00:02:49.952 15:45:29 -- setup/common.sh@19 -- # local var val 00:02:49.952 15:45:29 -- setup/common.sh@20 -- # local mem_f mem 00:02:49.952 15:45:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.952 15:45:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.952 15:45:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.952 15:45:29 -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.952 15:45:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:02:49.952 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.952 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.953 15:45:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170767556 kB' 'MemAvailable: 174591040 kB' 'Buffers: 3888 kB' 'Cached: 14431048 kB' 'SwapCached: 0 kB' 'Active: 11415248 kB' 'Inactive: 3663216 kB' 'Active(anon): 10357828 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646704 kB' 'Mapped: 245072 kB' 'Shmem: 9714300 kB' 'KReclaimable: 494124 kB' 'Slab: 1127696 kB' 'SReclaimable: 494124 kB' 'SUnreclaim: 633572 kB' 'KernelStack: 20752 kB' 'PageTables: 10568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 11866088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316340 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.953 15:45:29 -- 
setup/common.sh@32 -- # continue 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # continue
...
00:02:49.953 15:45:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # read
-r var val _ 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.953 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.953 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.954 15:45:29 -- setup/common.sh@33 -- # echo 0 00:02:49.954 15:45:29 -- setup/common.sh@33 -- # return 0 00:02:49.954 15:45:29 -- setup/hugepages.sh@97 -- # anon=0 00:02:49.954 15:45:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:49.954 15:45:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.954 15:45:29 -- setup/common.sh@18 -- # local node= 00:02:49.954 15:45:29 -- setup/common.sh@19 -- # local var val 00:02:49.954 15:45:29 -- setup/common.sh@20 -- # local mem_f mem 00:02:49.954 15:45:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.954 15:45:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.954 15:45:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.954 15:45:29 -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.954 15:45:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170767052 kB' 'MemAvailable: 174590536 kB' 'Buffers: 3888 kB' 'Cached: 14431052 kB' 'SwapCached: 0 kB' 'Active: 11414892 kB' 'Inactive: 3663216 kB' 'Active(anon): 10357472 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646424 kB' 'Mapped: 245056 kB' 'Shmem: 9714304 kB' 'KReclaimable: 494124 kB' 'Slab: 1127680 kB' 'SReclaimable: 494124 kB' 'SUnreclaim: 633556 kB' 'KernelStack: 20640 kB' 'PageTables: 10060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 11866100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316308 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 
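The odd_alloc run above exports HUGEMEM=2049 and HUGE_EVEN_ALLOC=yes and then hands the allocation to scripts/setup.sh, after which the meminfo dump reports HugePages_Total: 1025, with the per-node plan above assigning 513 pages to one node and 512 to the other. Below is a hedged sketch of driving that same step by hand; SPDK_DIR is only a convenience variable for the workspace path seen in the trace, and sudo -E is one way to keep the exported variables when not already running as root.

  #!/usr/bin/env bash
  # Sketch of the allocation step the odd_alloc test drives above: export the
  # hugepage budget and let scripts/setup.sh reserve the pages before verifying.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  export HUGEMEM=2049          # hugepage budget in MB, as set by the test above
  export HUGE_EVEN_ALLOC=yes   # spread the reservation across both NUMA nodes
  sudo -E "$SPDK_DIR/scripts/setup.sh"
  grep -E 'HugePages_(Total|Free)' /proc/meminfo   # should report the new reservation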
00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.954 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.954 15:45:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 
-- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.955 15:45:29 -- setup/common.sh@33 -- # echo 0 00:02:49.955 15:45:29 -- setup/common.sh@33 -- # return 0 00:02:49.955 15:45:29 -- setup/hugepages.sh@99 -- # surp=0 00:02:49.955 15:45:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:49.955 15:45:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:49.955 15:45:29 -- setup/common.sh@18 -- # local node= 00:02:49.955 15:45:29 -- setup/common.sh@19 -- # local var val 00:02:49.955 15:45:29 -- setup/common.sh@20 -- # local mem_f mem 00:02:49.955 15:45:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.955 15:45:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.955 15:45:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.955 15:45:29 -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.955 15:45:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170766508 kB' 'MemAvailable: 174589992 kB' 'Buffers: 3888 kB' 'Cached: 14431052 kB' 'SwapCached: 0 kB' 'Active: 11414404 kB' 'Inactive: 3663216 kB' 'Active(anon): 10356984 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645888 kB' 'Mapped: 245056 kB' 'Shmem: 9714304 kB' 'KReclaimable: 494124 kB' 'Slab: 1127716 kB' 'SReclaimable: 494124 kB' 'SUnreclaim: 633592 kB' 'KernelStack: 20688 kB' 'PageTables: 10544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 11866116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316324 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.955 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.955 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 
15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ Percpu 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.956 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.956 15:45:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.956 15:45:29 -- setup/common.sh@33 -- # echo 0 00:02:49.956 15:45:29 -- setup/common.sh@33 -- # return 0 
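With anon, surp and resv all reported as 0, the next entries (hugepages.sh@100-110) echo the expected values, assert that the kernel's HugePages_Total (1025) equals nr_hugepages + surp + resv, and then get_nodes enumerates /sys/devices/system/node/node* so the same lookup can be repeated per node; the per-node dumps further down show node0 holding 512 huge pages and node1 holding 513, matching the deliberately odd 1025 total. A self-contained equivalent of that arithmetic, written with awk purely for illustration and not taken from the SPDK scripts:

nr_hugepages=1025   # the odd total this test requests, to be split across two NUMA nodes
surp=$(awk '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
resv=$(awk '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2

# Per-node view: the same counters live under /sys, which is the file get_meminfo
# switches to below when called with a node argument (node=0, then node=1).
for node in 0 1; do
    awk '$3 == "HugePages_Total:" {print "node" $2 ": " $4}' \
        "/sys/devices/system/node/node${node}/meminfo"
done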
00:02:49.956 15:45:29 -- setup/hugepages.sh@100 -- # resv=0 00:02:49.956 15:45:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:49.956 nr_hugepages=1025 00:02:49.956 15:45:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:49.956 resv_hugepages=0 00:02:49.956 15:45:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:49.956 surplus_hugepages=0 00:02:49.956 15:45:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:49.957 anon_hugepages=0 00:02:49.957 15:45:29 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:49.957 15:45:29 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:49.957 15:45:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:49.957 15:45:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:49.957 15:45:29 -- setup/common.sh@18 -- # local node= 00:02:49.957 15:45:29 -- setup/common.sh@19 -- # local var val 00:02:49.957 15:45:29 -- setup/common.sh@20 -- # local mem_f mem 00:02:49.957 15:45:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.957 15:45:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.957 15:45:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.957 15:45:29 -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.957 15:45:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170766036 kB' 'MemAvailable: 174589520 kB' 'Buffers: 3888 kB' 'Cached: 14431056 kB' 'SwapCached: 0 kB' 'Active: 11414748 kB' 'Inactive: 3663216 kB' 'Active(anon): 10357328 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646208 kB' 'Mapped: 245056 kB' 'Shmem: 9714308 kB' 'KReclaimable: 494124 kB' 'Slab: 1127716 kB' 'SReclaimable: 494124 kB' 'SUnreclaim: 633592 kB' 'KernelStack: 20656 kB' 'PageTables: 10096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 11863460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316276 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 
-- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 
-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.957 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.957 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.958 15:45:29 -- setup/common.sh@33 -- # echo 1025 00:02:49.958 15:45:29 -- setup/common.sh@33 -- # return 0 00:02:49.958 15:45:29 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:49.958 15:45:29 -- setup/hugepages.sh@112 -- # get_nodes 00:02:49.958 15:45:29 -- setup/hugepages.sh@27 -- # local node 00:02:49.958 15:45:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:49.958 15:45:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:49.958 15:45:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:49.958 15:45:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:49.958 15:45:29 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:49.958 15:45:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:49.958 15:45:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:49.958 15:45:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:49.958 15:45:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:49.958 15:45:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.958 15:45:29 -- setup/common.sh@18 -- # local node=0 00:02:49.958 
15:45:29 -- setup/common.sh@19 -- # local var val 00:02:49.958 15:45:29 -- setup/common.sh@20 -- # local mem_f mem 00:02:49.958 15:45:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.958 15:45:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:49.958 15:45:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:49.958 15:45:29 -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.958 15:45:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.958 15:45:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91963404 kB' 'MemUsed: 5652224 kB' 'SwapCached: 0 kB' 'Active: 3202232 kB' 'Inactive: 133364 kB' 'Active(anon): 2759640 kB' 'Inactive(anon): 0 kB' 'Active(file): 442592 kB' 'Inactive(file): 133364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2884456 kB' 'Mapped: 99912 kB' 'AnonPages: 453776 kB' 'Shmem: 2308500 kB' 'KernelStack: 11320 kB' 'PageTables: 5576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 268388 kB' 'Slab: 572452 kB' 'SReclaimable: 268388 kB' 'SUnreclaim: 304064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.958 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.958 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- 
# read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- 
# continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@33 -- # echo 0 00:02:49.959 15:45:29 -- setup/common.sh@33 -- # return 0 00:02:49.959 15:45:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:49.959 15:45:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:49.959 15:45:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:49.959 15:45:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:49.959 15:45:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.959 15:45:29 -- setup/common.sh@18 -- # local node=1 00:02:49.959 15:45:29 -- setup/common.sh@19 -- # local var val 00:02:49.959 15:45:29 -- setup/common.sh@20 -- # local mem_f mem 00:02:49.959 15:45:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.959 15:45:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:49.959 15:45:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:49.959 15:45:29 -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.959 15:45:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765552 kB' 'MemFree: 78804344 kB' 'MemUsed: 14961208 kB' 'SwapCached: 0 kB' 'Active: 8212008 kB' 'Inactive: 3529852 kB' 'Active(anon): 7597180 kB' 'Inactive(anon): 0 kB' 'Active(file): 614828 kB' 'Inactive(file): 3529852 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11550488 kB' 'Mapped: 145144 kB' 'AnonPages: 191476 kB' 'Shmem: 7405808 kB' 'KernelStack: 9192 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 225736 kB' 'Slab: 555200 kB' 'SReclaimable: 225736 kB' 'SUnreclaim: 329464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.959 15:45:29 -- setup/common.sh@32 -- # continue 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.959 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.220 15:45:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.220 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.220 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.221 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.221 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.221 15:45:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.221 15:45:29 -- setup/common.sh@32 -- # 
continue 00:02:50.221 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.221 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.221 15:45:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.221 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.221 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.221 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.221 15:45:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.221 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.221 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.221 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.221 15:45:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.221 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.221 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.221 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.221 15:45:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.221 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.221 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.221 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.221 15:45:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.221 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.221 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.221 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.221 15:45:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.221 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.221 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.221 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.221 15:45:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.221 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.221 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.221 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.221 15:45:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.221 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.221 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.221 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.221 15:45:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.221 15:45:29 -- setup/common.sh@32 -- # continue 00:02:50.221 15:45:29 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.221 15:45:29 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.221 15:45:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.221 15:45:29 -- setup/common.sh@33 -- # echo 0 00:02:50.221 15:45:29 -- setup/common.sh@33 -- # return 0 00:02:50.221 15:45:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:50.221 15:45:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:50.221 15:45:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:50.221 15:45:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:50.221 15:45:29 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:50.221 node0=512 expecting 513 00:02:50.221 15:45:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:50.221 15:45:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:50.221 15:45:29 -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:50.221 15:45:29 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:50.221 node1=513 expecting 512 00:02:50.221 15:45:29 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:50.221 00:02:50.221 real 0m2.625s 00:02:50.221 user 0m1.063s 00:02:50.221 sys 0m1.580s 00:02:50.221 15:45:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:50.221 15:45:29 -- common/autotest_common.sh@10 -- # set +x 00:02:50.221 ************************************ 00:02:50.221 END TEST odd_alloc 00:02:50.221 ************************************ 00:02:50.221 15:45:29 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:50.221 15:45:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:50.221 15:45:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:50.221 15:45:29 -- common/autotest_common.sh@10 -- # set +x 00:02:50.221 ************************************ 00:02:50.221 START TEST custom_alloc 00:02:50.221 ************************************ 00:02:50.221 15:45:29 -- common/autotest_common.sh@1111 -- # custom_alloc 00:02:50.221 15:45:29 -- setup/hugepages.sh@167 -- # local IFS=, 00:02:50.221 15:45:29 -- setup/hugepages.sh@169 -- # local node 00:02:50.221 15:45:29 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:50.221 15:45:29 -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:50.221 15:45:29 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:50.221 15:45:29 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:50.221 15:45:29 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:50.221 15:45:29 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:50.221 15:45:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:50.221 15:45:29 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:50.221 15:45:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:50.221 15:45:29 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:50.221 15:45:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:50.221 15:45:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:50.221 15:45:29 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:50.221 15:45:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:50.221 15:45:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:50.221 15:45:29 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:50.221 15:45:29 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:50.221 15:45:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:50.221 15:45:29 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:50.221 15:45:29 -- setup/hugepages.sh@83 -- # : 256 00:02:50.221 15:45:29 -- setup/hugepages.sh@84 -- # : 1 00:02:50.221 15:45:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:50.221 15:45:29 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:50.221 15:45:29 -- setup/hugepages.sh@83 -- # : 0 00:02:50.221 15:45:29 -- setup/hugepages.sh@84 -- # : 0 00:02:50.221 15:45:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:50.221 15:45:29 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:50.221 15:45:29 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:50.221 15:45:29 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:50.221 15:45:29 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:50.221 15:45:29 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:50.221 15:45:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:50.221 
15:45:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:50.221 15:45:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:50.221 15:45:29 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:50.221 15:45:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:50.221 15:45:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:50.221 15:45:29 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:50.221 15:45:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:50.221 15:45:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:50.221 15:45:29 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:50.221 15:45:29 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:50.221 15:45:29 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:50.221 15:45:29 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:50.221 15:45:29 -- setup/hugepages.sh@78 -- # return 0 00:02:50.221 15:45:29 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:50.221 15:45:29 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:50.221 15:45:29 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:50.221 15:45:29 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:50.221 15:45:29 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:50.221 15:45:29 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:50.221 15:45:29 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:50.221 15:45:29 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:50.221 15:45:29 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:50.221 15:45:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:50.221 15:45:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:50.221 15:45:29 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:50.221 15:45:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:50.221 15:45:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:50.221 15:45:29 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:50.221 15:45:29 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:50.221 15:45:29 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:50.221 15:45:29 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:50.221 15:45:29 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:50.221 15:45:29 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:50.221 15:45:29 -- setup/hugepages.sh@78 -- # return 0 00:02:50.221 15:45:29 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:50.221 15:45:29 -- setup/hugepages.sh@187 -- # setup output 00:02:50.221 15:45:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:50.221 15:45:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:52.865 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:52.865 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:52.865 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:52.865 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:52.865 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:52.865 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:52.865 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:52.865 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:52.865 0000:00:04.0 (8086 2021): Already using the 
vfio-pci driver 00:02:52.865 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:52.865 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:52.865 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:52.865 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:52.865 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:52.865 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:52.865 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:52.865 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:52.865 15:45:32 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:52.865 15:45:32 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:52.865 15:45:32 -- setup/hugepages.sh@89 -- # local node 00:02:52.865 15:45:32 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:52.865 15:45:32 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:52.865 15:45:32 -- setup/hugepages.sh@92 -- # local surp 00:02:52.865 15:45:32 -- setup/hugepages.sh@93 -- # local resv 00:02:52.865 15:45:32 -- setup/hugepages.sh@94 -- # local anon 00:02:52.865 15:45:32 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:52.865 15:45:32 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:52.865 15:45:32 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:52.865 15:45:32 -- setup/common.sh@18 -- # local node= 00:02:52.865 15:45:32 -- setup/common.sh@19 -- # local var val 00:02:52.865 15:45:32 -- setup/common.sh@20 -- # local mem_f mem 00:02:52.865 15:45:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.865 15:45:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.865 15:45:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.865 15:45:32 -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.865 15:45:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.865 15:45:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 169718416 kB' 'MemAvailable: 173541900 kB' 'Buffers: 3888 kB' 'Cached: 14431164 kB' 'SwapCached: 0 kB' 'Active: 11415176 kB' 'Inactive: 3663216 kB' 'Active(anon): 10357756 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646284 kB' 'Mapped: 245128 kB' 'Shmem: 9714416 kB' 'KReclaimable: 494124 kB' 'Slab: 1127144 kB' 'SReclaimable: 494124 kB' 'SUnreclaim: 633020 kB' 'KernelStack: 20592 kB' 'PageTables: 10164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 11864224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316116 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # 
continue 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.865 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.865 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 
-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 
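The xtrace above and below is setup/common.sh's get_meminfo walking every field of /proc/meminfo (or of /sys/devices/system/node/nodeN/meminfo when a node index is given) until it reaches the field it was asked for. A minimal bash sketch of that pattern, reconstructed from the trace rather than copied verbatim from the SPDK script, looks like this:

  #!/usr/bin/env bash
  shopt -s extglob                                   # needed for the "Node N " prefix strip below
  get_meminfo() {
          local get=$1 node=$2 var val
          local mem_f=/proc/meminfo mem
          # per-node counters live under /sys when a node index is passed in
          if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                  mem_f=/sys/devices/system/node/node$node/meminfo
          fi
          mapfile -t mem < "$mem_f"
          mem=("${mem[@]#Node +([0-9]) }")           # per-node files prefix each line with "Node N "
          local line
          for line in "${mem[@]}"; do
                  IFS=': ' read -r var val _ <<< "$line"
                  [[ $var == "$get" ]] || continue   # every non-matching field shows up as "continue" in the trace
                  echo "$val"                        # e.g. HugePages_Surp -> 0
                  return 0
          done
  }

Each non-matching field produces the [[ ... ]] / continue / IFS / read quartet seen here, which is why these scans dominate the log.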
00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.866 15:45:32 -- setup/common.sh@33 -- # echo 0 00:02:52.866 15:45:32 -- setup/common.sh@33 -- # return 0 00:02:52.866 15:45:32 -- setup/hugepages.sh@97 -- # anon=0 00:02:52.866 15:45:32 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:52.866 15:45:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:52.866 15:45:32 -- setup/common.sh@18 -- # local node= 00:02:52.866 15:45:32 -- setup/common.sh@19 -- # local var val 00:02:52.866 15:45:32 -- setup/common.sh@20 -- # local mem_f mem 00:02:52.866 15:45:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.866 15:45:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.866 15:45:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.866 15:45:32 -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.866 15:45:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 169718420 kB' 'MemAvailable: 173541904 kB' 'Buffers: 3888 kB' 'Cached: 14431168 kB' 'SwapCached: 0 kB' 'Active: 11414440 kB' 'Inactive: 3663216 kB' 'Active(anon): 10357020 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645992 kB' 'Mapped: 245032 kB' 'Shmem: 9714420 kB' 'KReclaimable: 494124 kB' 'Slab: 1127136 kB' 'SReclaimable: 494124 kB' 'SUnreclaim: 633012 kB' 'KernelStack: 20592 kB' 'PageTables: 10164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 11864236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316100 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.866 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.866 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.867 15:45:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.867 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.867 15:45:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.867 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.867 15:45:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.867 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.867 15:45:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.867 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.867 15:45:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.867 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.867 15:45:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.867 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.867 15:45:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.867 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.867 15:45:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.867 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.867 15:45:32 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.867 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.867 15:45:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.867 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.867 15:45:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.867 15:45:32 -- setup/common.sh@32 -- # continue 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.867 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # read -r 
var val _ 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.130 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.130 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 
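At this point the trace has already recorded anon=0 from AnonHugePages, is partway through the same scan for HugePages_Surp, and will repeat it for HugePages_Rsvd before comparing everything against the 1536 pages requested for this custom_alloc run (512 on node0 plus 1024 on node1, with HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' set before scripts/setup.sh ran). A condensed sketch of that accounting, using the hypothetical helper name verify_hugepage_accounting and the get_meminfo sketch above rather than the verbatim setup/hugepages.sh code:

  verify_hugepage_accounting() {
          local requested=$1                        # 1536 in this run
          local anon surp resv total
          anon=$(get_meminfo AnonHugePages)         # reported below as anon_hugepages
          surp=$(get_meminfo HugePages_Surp)        # reported below as surplus_hugepages
          resv=$(get_meminfo HugePages_Rsvd)        # reported below as resv_hugepages
          total=$(get_meminfo HugePages_Total)
          # mirrors the trace's checks: (( 1536 == nr_hugepages + surp + resv )) and (( 1536 == nr_hugepages ))
          (( requested == total + surp + resv )) || return 1
          (( requested == total )) || return 1
          echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
  }

With surplus and reserved pages both at 0, the check passes only if the kernel actually created all 1536 hugepages it was asked for.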
00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.131 15:45:32 -- setup/common.sh@33 -- # echo 0 00:02:53.131 15:45:32 -- setup/common.sh@33 -- # return 0 00:02:53.131 15:45:32 -- setup/hugepages.sh@99 -- # surp=0 00:02:53.131 15:45:32 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:53.131 15:45:32 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:53.131 15:45:32 -- setup/common.sh@18 -- # local node= 00:02:53.131 15:45:32 -- setup/common.sh@19 -- # local var val 00:02:53.131 15:45:32 -- setup/common.sh@20 
-- # local mem_f mem 00:02:53.131 15:45:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.131 15:45:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.131 15:45:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.131 15:45:32 -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.131 15:45:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 169719848 kB' 'MemAvailable: 173543332 kB' 'Buffers: 3888 kB' 'Cached: 14431180 kB' 'SwapCached: 0 kB' 'Active: 11415252 kB' 'Inactive: 3663216 kB' 'Active(anon): 10357832 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646796 kB' 'Mapped: 245032 kB' 'Shmem: 9714432 kB' 'KReclaimable: 494124 kB' 'Slab: 1127136 kB' 'SReclaimable: 494124 kB' 'SUnreclaim: 633012 kB' 'KernelStack: 20624 kB' 'PageTables: 10296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 11863884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316084 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 
15:45:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 
15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.131 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.131 15:45:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.131 15:45:32 -- setup/common.sh@33 -- # echo 0 00:02:53.131 15:45:32 -- setup/common.sh@33 -- # return 0 00:02:53.131 15:45:32 -- setup/hugepages.sh@100 -- # resv=0 00:02:53.131 15:45:32 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:53.131 nr_hugepages=1536 00:02:53.131 15:45:32 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:53.131 resv_hugepages=0 00:02:53.131 15:45:32 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:53.131 surplus_hugepages=0 00:02:53.131 15:45:32 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:53.131 anon_hugepages=0 00:02:53.131 15:45:32 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:53.131 15:45:32 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:53.131 15:45:32 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:53.131 15:45:32 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:53.131 15:45:32 -- setup/common.sh@18 -- # local node= 00:02:53.131 15:45:32 -- setup/common.sh@19 -- # local var val 00:02:53.131 15:45:32 -- setup/common.sh@20 -- # local mem_f mem 00:02:53.131 15:45:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.131 15:45:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.132 15:45:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.132 15:45:32 -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.132 15:45:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.132 15:45:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 169720248 kB' 'MemAvailable: 173543732 kB' 'Buffers: 3888 kB' 'Cached: 14431204 kB' 'SwapCached: 0 kB' 'Active: 11414296 kB' 'Inactive: 3663216 kB' 
'Active(anon): 10356876 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645732 kB' 'Mapped: 245028 kB' 'Shmem: 9714456 kB' 'KReclaimable: 494124 kB' 'Slab: 1127128 kB' 'SReclaimable: 494124 kB' 'SUnreclaim: 633004 kB' 'KernelStack: 20544 kB' 'PageTables: 9972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 11863896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316036 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 
15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 
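The key-by-key scan running above and below this point is setup/common.sh's get_meminfo walking /proc/meminfo until it reaches the requested field, then echoing that field's numeric value; the per-node passes later in the trace do the same against /sys/devices/system/node/nodeN/meminfo after stripping the "Node N " prefix. A minimal standalone sketch of that lookup, where get_meminfo_sketch is an illustrative name and not part of the SPDK scripts:

    # Sketch only: the same lookup the traced get_meminfo performs, minus the xtrace noise.
    get_meminfo_sketch() {
        local key=$1 node=${2:-} file=/proc/meminfo
        # An optional node argument switches to that node's meminfo, as in the per-node passes.
        [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo
        # Per-node files prefix every line with "Node <n> "; strip it, then match the key exactly.
        sed -E 's/^Node [0-9]+ //' "$file" | awk -F':' -v k="$key" '$1 == k { print $2 + 0; exit }'
    }

On the machine in this run, get_meminfo_sketch HugePages_Total should print 1536 and get_meminfo_sketch HugePages_Surp 0 should print 0, the same values the real helper echoes in the trace.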
15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.132 15:45:32 -- setup/common.sh@33 -- # echo 1536 00:02:53.132 15:45:32 -- setup/common.sh@33 -- # return 0 00:02:53.132 15:45:32 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:53.132 15:45:32 -- setup/hugepages.sh@112 -- # get_nodes 00:02:53.132 15:45:32 -- setup/hugepages.sh@27 -- # local node 00:02:53.132 15:45:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.132 15:45:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:53.132 15:45:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.132 15:45:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:53.132 15:45:32 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:53.132 15:45:32 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:53.132 15:45:32 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:53.132 15:45:32 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:53.132 15:45:32 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:53.132 15:45:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.132 15:45:32 -- setup/common.sh@18 -- # local node=0 00:02:53.132 15:45:32 -- setup/common.sh@19 -- # local var val 00:02:53.132 15:45:32 -- setup/common.sh@20 -- # local mem_f mem 00:02:53.132 15:45:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.132 15:45:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:53.132 15:45:32 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:53.132 15:45:32 -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.132 15:45:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91962180 kB' 'MemUsed: 5653448 kB' 'SwapCached: 0 kB' 'Active: 3203228 kB' 'Inactive: 133364 kB' 'Active(anon): 2760636 kB' 'Inactive(anon): 0 kB' 'Active(file): 442592 kB' 'Inactive(file): 133364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2884596 kB' 'Mapped: 99864 kB' 'AnonPages: 455204 kB' 'Shmem: 2308640 kB' 'KernelStack: 11384 kB' 'PageTables: 5760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 268388 kB' 'Slab: 571756 kB' 'SReclaimable: 268388 kB' 'SUnreclaim: 303368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.132 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.132 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # 
continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@33 -- # echo 0 00:02:53.133 15:45:32 -- setup/common.sh@33 -- # return 0 00:02:53.133 15:45:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:53.133 15:45:32 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:53.133 15:45:32 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:53.133 15:45:32 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:53.133 15:45:32 -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.133 15:45:32 -- setup/common.sh@18 -- # local node=1 00:02:53.133 15:45:32 -- setup/common.sh@19 -- # local var val 00:02:53.133 15:45:32 -- setup/common.sh@20 -- # local mem_f mem 00:02:53.133 15:45:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.133 15:45:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:53.133 15:45:32 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:53.133 15:45:32 -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.133 15:45:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765552 kB' 'MemFree: 77758068 kB' 'MemUsed: 16007484 kB' 'SwapCached: 0 kB' 'Active: 8211652 kB' 'Inactive: 3529852 kB' 'Active(anon): 7596824 kB' 'Inactive(anon): 0 kB' 'Active(file): 614828 kB' 'Inactive(file): 3529852 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11550508 kB' 'Mapped: 145164 kB' 'AnonPages: 191112 kB' 'Shmem: 7405828 kB' 'KernelStack: 9208 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 225736 kB' 'Slab: 555372 kB' 'SReclaimable: 225736 kB' 'SUnreclaim: 329636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 
-- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- 
# continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # continue 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.133 15:45:32 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.133 15:45:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.133 15:45:32 -- setup/common.sh@33 -- # echo 0 00:02:53.133 15:45:32 -- setup/common.sh@33 -- # return 0 00:02:53.133 15:45:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:53.133 15:45:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:53.133 15:45:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:53.133 15:45:32 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:53.133 15:45:32 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:53.133 node0=512 expecting 512 00:02:53.133 15:45:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:53.133 15:45:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:53.133 15:45:32 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:53.133 15:45:32 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:53.133 node1=1024 expecting 1024 00:02:53.133 15:45:32 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:53.133 00:02:53.133 real 0m2.879s 00:02:53.133 user 0m1.232s 00:02:53.133 sys 0m1.705s 00:02:53.133 15:45:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:53.133 15:45:32 -- common/autotest_common.sh@10 -- # set +x 00:02:53.134 ************************************ 00:02:53.134 END TEST custom_alloc 00:02:53.134 ************************************ 00:02:53.134 15:45:32 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:53.134 15:45:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:53.134 15:45:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:53.134 15:45:32 -- common/autotest_common.sh@10 -- # set +x 00:02:53.393 ************************************ 00:02:53.393 START TEST no_shrink_alloc 00:02:53.393 ************************************ 00:02:53.393 15:45:32 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:02:53.393 15:45:32 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:53.393 15:45:32 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:53.393 15:45:32 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:53.393 15:45:32 -- setup/hugepages.sh@51 -- # shift 00:02:53.393 15:45:32 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:53.393 15:45:32 -- setup/hugepages.sh@52 -- # local node_ids 
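custom_alloc passes here because the per-node pools reported by the kernel (512 hugepages on node 0, 1024 on node 1) match the 512,1024 split the test requested. The no_shrink_alloc test that starts next sizes its pool with get_test_nr_hugepages 2097152 0; judging from the trace, the first argument is a size in kB and the second a node id, and the nr_hugepages=1024 that follows works out to that size divided by the 2048 kB hugepage size reported in meminfo. A back-of-the-envelope check, reusing the sketch helper introduced earlier (names are assumptions):

    # Sizing sanity check for no_shrink_alloc; the constants come from this run's trace.
    size_kb=2097152                                      # argument passed to get_test_nr_hugepages
    hugepagesize_kb=$(get_meminfo_sketch Hugepagesize)   # 2048 kB on this system
    echo $(( size_kb / hugepagesize_kb ))                # prints 1024, matching nr_hugepages below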
00:02:53.393 15:45:32 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:53.393 15:45:32 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:53.393 15:45:32 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:53.393 15:45:32 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:53.393 15:45:32 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:53.393 15:45:32 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:53.393 15:45:32 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:53.393 15:45:32 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:53.393 15:45:32 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:53.393 15:45:32 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:53.393 15:45:32 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:53.393 15:45:32 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:53.393 15:45:32 -- setup/hugepages.sh@73 -- # return 0 00:02:53.393 15:45:32 -- setup/hugepages.sh@198 -- # setup output 00:02:53.393 15:45:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.393 15:45:32 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:55.933 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:55.933 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:55.933 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:55.933 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:55.933 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:55.933 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:55.933 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:55.933 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:55.933 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:55.933 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:55.933 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:55.933 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:55.933 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:55.933 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:55.933 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:55.933 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:55.933 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:55.933 15:45:35 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:55.933 15:45:35 -- setup/hugepages.sh@89 -- # local node 00:02:55.933 15:45:35 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:55.933 15:45:35 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:55.933 15:45:35 -- setup/hugepages.sh@92 -- # local surp 00:02:55.933 15:45:35 -- setup/hugepages.sh@93 -- # local resv 00:02:55.933 15:45:35 -- setup/hugepages.sh@94 -- # local anon 00:02:55.933 15:45:35 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:55.933 15:45:35 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:55.933 15:45:35 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:55.933 15:45:35 -- setup/common.sh@18 -- # local node= 00:02:55.933 15:45:35 -- setup/common.sh@19 -- # local var val 00:02:55.933 15:45:35 -- setup/common.sh@20 -- # local mem_f mem 00:02:55.933 15:45:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.933 15:45:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.933 15:45:35 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.933 15:45:35 -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.933 15:45:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.933 15:45:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170817288 kB' 'MemAvailable: 174640756 kB' 'Buffers: 3888 kB' 'Cached: 14431284 kB' 'SwapCached: 0 kB' 'Active: 11413772 kB' 'Inactive: 3663216 kB' 'Active(anon): 10356352 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644976 kB' 'Mapped: 245592 kB' 'Shmem: 9714536 kB' 'KReclaimable: 494092 kB' 'Slab: 1127320 kB' 'SReclaimable: 494092 kB' 'SUnreclaim: 633228 kB' 'KernelStack: 20576 kB' 'PageTables: 10104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11866440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316084 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:55.933 15:45:35 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.933 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.933 15:45:35 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.933 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.933 15:45:35 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.933 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.933 15:45:35 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.933 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.933 15:45:35 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.933 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.933 15:45:35 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.933 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.933 15:45:35 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.933 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.933 15:45:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:55.933 15:45:35 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.933 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.933 15:45:35 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.933 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.933 15:45:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.933 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.933 15:45:35 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.933 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.933 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 
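The verify_nr_hugepages pass underway here starts with transparent hugepages: the string tested at hugepages.sh@96, "always [madvise] never", is the usual contents of /sys/kernel/mm/transparent_hugepage/enabled (presumably where the script reads it from), with the bracketed entry marking the active mode. Since the mode is not [never], the script also pulls AnonHugePages out of meminfo, which is the key-by-key scan surrounding this point and comes back as 0 in this run. A condensed sketch of that probe, again using the hypothetical helper from above:

    # Sketch of the anon-hugepage probe; the sysfs path and helper name are assumptions.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)             # 0 kB on this box
    else
        anon=0
    fi
    echo "anon_hugepages=$anon"                              # matches the anon=0 recorded at hugepages.sh@97 in the trace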
15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
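The long runs of "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" followed by "continue" above are bash xtrace of setup/common.sh walking the mapfile'd /proc/meminfo entries one key at a time until it reaches the requested one (here AnonHugePages), then echoing its value. A minimal sketch of that scan, with a hypothetical helper name rather than the verbatim SPDK source:

    meminfo_value() {                           # meminfo_value <Key> [file]
        local get=$1 file=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do    # split "Key:   value kB"
            [[ $var == "$get" ]] || continue    # skip every non-matching key
            echo "$val"                         # print just the number, unit dropped
            return 0
        done < "$file"
        return 1                                # requested key not present
    }

    meminfo_value AnonHugePages                 # prints 0 on this runner, matching the trace

Quoting "$get" on the right-hand side gives the same literal comparison that the backslash-escaped pattern in the trace is spelling out.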
00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.934 15:45:35 -- setup/common.sh@33 -- # echo 0 00:02:55.934 15:45:35 -- setup/common.sh@33 -- # return 0 00:02:55.934 15:45:35 -- setup/hugepages.sh@97 -- # anon=0 00:02:55.934 15:45:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:55.934 15:45:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:55.934 15:45:35 -- setup/common.sh@18 -- # local node= 00:02:55.934 15:45:35 -- setup/common.sh@19 -- # local var val 00:02:55.934 15:45:35 -- setup/common.sh@20 -- # local mem_f mem 00:02:55.934 15:45:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.934 15:45:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.934 15:45:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.934 15:45:35 -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.934 15:45:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170809664 kB' 'MemAvailable: 174633132 kB' 'Buffers: 3888 kB' 'Cached: 14431288 kB' 'SwapCached: 0 kB' 'Active: 11417668 kB' 'Inactive: 3663216 kB' 'Active(anon): 10360248 kB' 'Inactive(anon): 0 kB' 
'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 649460 kB' 'Mapped: 245568 kB' 'Shmem: 9714540 kB' 'KReclaimable: 494092 kB' 'Slab: 1127312 kB' 'SReclaimable: 494092 kB' 'SUnreclaim: 633220 kB' 'KernelStack: 20544 kB' 'PageTables: 10056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11870556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316056 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.934 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.934 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.935 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.935 15:45:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.935 15:45:35 -- setup/common.sh@32 -- # continue 00:02:55.935 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.935 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.935 15:45:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # read -r var 
val _ 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.197 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.197 15:45:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 
15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.198 15:45:35 -- setup/common.sh@33 -- # echo 0 00:02:56.198 15:45:35 -- setup/common.sh@33 -- # return 0 00:02:56.198 15:45:35 -- setup/hugepages.sh@99 -- # surp=0 00:02:56.198 15:45:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:56.198 15:45:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:56.198 15:45:35 -- setup/common.sh@18 -- # local node= 00:02:56.198 15:45:35 -- setup/common.sh@19 -- # local var val 00:02:56.198 15:45:35 -- setup/common.sh@20 -- # local mem_f mem 00:02:56.198 15:45:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.198 15:45:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.198 15:45:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.198 15:45:35 -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.198 15:45:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170818648 kB' 'MemAvailable: 174642116 kB' 'Buffers: 3888 kB' 'Cached: 14431300 kB' 'SwapCached: 0 kB' 'Active: 11412896 kB' 'Inactive: 3663216 kB' 'Active(anon): 10355476 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644208 kB' 'Mapped: 245428 kB' 'Shmem: 9714552 kB' 'KReclaimable: 494092 kB' 'Slab: 1127296 kB' 'SReclaimable: 494092 kB' 'SUnreclaim: 633204 kB' 'KernelStack: 20560 kB' 'PageTables: 10108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11864452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316052 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.198 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.198 15:45:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 
00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 
15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.199 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.199 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.200 15:45:35 -- setup/common.sh@33 -- # echo 0 00:02:56.200 15:45:35 -- setup/common.sh@33 -- # return 0 00:02:56.200 15:45:35 -- setup/hugepages.sh@100 -- # resv=0 00:02:56.200 15:45:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:56.200 nr_hugepages=1024 00:02:56.200 15:45:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:56.200 resv_hugepages=0 00:02:56.200 15:45:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:56.200 surplus_hugepages=0 00:02:56.200 15:45:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:56.200 anon_hugepages=0 00:02:56.200 15:45:35 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:56.200 15:45:35 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:56.200 15:45:35 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:56.200 15:45:35 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:56.200 15:45:35 -- setup/common.sh@18 -- # local node= 00:02:56.200 15:45:35 -- setup/common.sh@19 -- # local var val 00:02:56.200 15:45:35 -- setup/common.sh@20 -- # local mem_f mem 00:02:56.200 15:45:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.200 15:45:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.200 15:45:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.200 15:45:35 -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.200 15:45:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170818788 kB' 'MemAvailable: 174642256 kB' 'Buffers: 3888 kB' 'Cached: 14431300 kB' 'SwapCached: 0 kB' 'Active: 11412708 kB' 'Inactive: 3663216 kB' 'Active(anon): 10355288 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644028 kB' 'Mapped: 245064 kB' 'Shmem: 9714552 kB' 'KReclaimable: 494092 kB' 'Slab: 1127296 kB' 'SReclaimable: 494092 kB' 'SUnreclaim: 633204 kB' 'KernelStack: 20544 kB' 'PageTables: 10040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11864468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316052 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 
-- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- 
setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.200 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.200 15:45:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 
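The summary echoed above (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feeds the "(( 1024 == nr_hugepages + surp + resv ))" consistency check, which hugepages.sh repeats after re-reading HugePages_Total. Roughly, as a sketch of that check rather than the exact hugepages.sh logic:

    nr_hugepages=1024 surp=0 resv=0 anon=0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 in this run
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: total=$total anon=$anon"
    else
        echo "unexpected HugePages_Total=$total" >&2
    fi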
00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.201 15:45:35 -- setup/common.sh@33 -- # echo 1024 00:02:56.201 15:45:35 -- setup/common.sh@33 -- # return 0 00:02:56.201 15:45:35 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:56.201 15:45:35 -- setup/hugepages.sh@112 -- # get_nodes 00:02:56.201 15:45:35 -- setup/hugepages.sh@27 -- # local node 00:02:56.201 15:45:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.201 15:45:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:56.201 15:45:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.201 15:45:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:56.201 15:45:35 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:56.201 15:45:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:56.201 15:45:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:56.201 15:45:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:56.201 15:45:35 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:56.201 15:45:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.201 15:45:35 -- setup/common.sh@18 -- # local node=0 00:02:56.201 15:45:35 -- setup/common.sh@19 -- # local var val 00:02:56.201 15:45:35 -- setup/common.sh@20 -- # local mem_f mem 00:02:56.201 15:45:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.201 15:45:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:56.201 15:45:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:56.201 15:45:35 -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.201 15:45:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.201 15:45:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 90950964 kB' 'MemUsed: 6664664 kB' 'SwapCached: 0 kB' 'Active: 3201336 kB' 'Inactive: 133364 kB' 'Active(anon): 2758744 kB' 'Inactive(anon): 0 kB' 'Active(file): 442592 kB' 'Inactive(file): 133364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2884684 kB' 'Mapped: 99868 kB' 'AnonPages: 453192 kB' 'Shmem: 2308728 kB' 'KernelStack: 11336 kB' 'PageTables: 5684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 268388 kB' 'Slab: 572092 kB' 'SReclaimable: 268388 kB' 'SUnreclaim: 303704 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.201 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.201 15:45:35 -- setup/common.sh@32 -- # continue 
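Once a node argument is supplied (local node=0 above), the same scan switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " prefix, which is what the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace does. A per-node variant of the earlier sketch, again with a hypothetical name and a literal prefix strip in place of the extglob pattern:

    node_meminfo_value() {                      # node_meminfo_value <Key> <node-id>
        local get=$1 node=$2 line var val _
        while read -r line; do
            line=${line#"Node ${node} "}        # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }

    node_meminfo_value HugePages_Surp 0         # 0 on node0 in this run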
00:02:56.201 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 
15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # continue 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.202 15:45:35 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.202 15:45:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.202 15:45:35 -- setup/common.sh@33 -- # echo 0 00:02:56.202 15:45:35 -- setup/common.sh@33 -- # return 0 00:02:56.202 15:45:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.202 15:45:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.202 15:45:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.202 15:45:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.202 15:45:35 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:56.202 node0=1024 expecting 1024 00:02:56.202 15:45:35 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:56.202 15:45:35 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:02:56.202 15:45:35 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:02:56.202 15:45:35 -- setup/hugepages.sh@202 -- # setup output 00:02:56.202 15:45:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.202 15:45:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:58.744 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:58.744 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:58.744 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:58.744 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:58.744 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:58.744 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:58.744 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:58.744 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:58.744 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:58.744 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:58.744 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:58.744 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:58.744 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:58.744 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:58.744 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:58.744 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:58.744 0000:80:04.0 (8086 2021): Already using the vfio-pci 
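For readers following the trace above: the get_meminfo lookup in setup/common.sh can be summarised in a short bash sketch. This is a condensation of the commands visible in the trace, not the verbatim SPDK script; function and file names simply mirror the log.

    #!/usr/bin/env bash
    # Minimal sketch of the traced get_meminfo helper: read one field from a node's
    # meminfo file, falling back to /proc/meminfo when no node is given.
    shopt -s extglob                        # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        local -a mem
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node lines are prefixed "Node <n> "
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"                 # value in kB, or a bare page count
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo HugePages_Surp 0            # prints 0 on this node, as in the log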
00:02:58.744 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:02:58.744 15:45:38 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:02:58.744 15:45:38 -- setup/hugepages.sh@89 -- # local node
00:02:58.744 15:45:38 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:58.744 15:45:38 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:58.744 15:45:38 -- setup/hugepages.sh@92 -- # local surp
00:02:58.744 15:45:38 -- setup/hugepages.sh@93 -- # local resv
00:02:58.744 15:45:38 -- setup/hugepages.sh@94 -- # local anon
00:02:58.744 15:45:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:58.744 15:45:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:58.744 15:45:38 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:58.744 15:45:38 -- setup/common.sh@18 -- # local node=
00:02:58.744 15:45:38 -- setup/common.sh@19 -- # local var val
00:02:58.744 15:45:38 -- setup/common.sh@20 -- # local mem_f mem
00:02:58.744 15:45:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:58.744 15:45:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:58.744 15:45:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:58.744 15:45:38 -- setup/common.sh@28 -- # mapfile -t mem
00:02:58.744 15:45:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:58.744 15:45:38 -- setup/common.sh@31 -- # IFS=': '
00:02:58.744 15:45:38 -- setup/common.sh@31 -- # read -r var val _
00:02:58.744 15:45:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170825656 kB' 'MemAvailable: 174649124 kB' 'Buffers: 3888 kB' 'Cached: 14431384 kB' 'SwapCached: 0 kB' 'Active: 11413056 kB' 'Inactive: 3663216 kB' 'Active(anon): 10355636 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644244 kB' 'Mapped: 245148 kB' 'Shmem: 9714636 kB' 'KReclaimable: 494092 kB' 'Slab: 1127980 kB' 'SReclaimable: 494092 kB' 'SUnreclaim: 633888 kB' 'KernelStack: 20576 kB' 'PageTables: 10092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11865396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316100 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB'
00:02:58.744 15:45:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:58.744 15:45:38 -- setup/common.sh@32 -- # continue
[... identical @31 IFS/read and @32 compare/continue trace records repeat for each remaining /proc/meminfo field until AnonHugePages is reached ...]
00:02:58.745 15:45:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:58.745 15:45:38 -- setup/common.sh@33 -- # echo 0
00:02:58.745 15:45:38 -- setup/common.sh@33 -- # return 0
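The @96 test at the start of verify_nr_hugepages is a transparent-hugepage gate: AnonHugePages is only looked up when THP is not pinned to "[never]". A hedged sketch of that gate follows; the sysfs path is the usual source of the "always [madvise] never" string seen in the trace, and get_meminfo is the helper sketched earlier.

    # THP gate as at setup/hugepages.sh@96: skip the AnonHugePages lookup only when
    # transparent hugepages are fully disabled ("[never]" selected).
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)                  # 0 kB on this host, per the log
        echo "AnonHugePages: ${anon} kB"
    fi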
00:02:58.745 15:45:38 -- setup/hugepages.sh@97 -- # anon=0
00:02:58.745 15:45:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:58.745 15:45:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:58.745 15:45:38 -- setup/common.sh@18 -- # local node=
00:02:58.745 15:45:38 -- setup/common.sh@19 -- # local var val
00:02:58.746 15:45:38 -- setup/common.sh@20 -- # local mem_f mem
00:02:58.746 15:45:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:58.746 15:45:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:58.746 15:45:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:58.746 15:45:38 -- setup/common.sh@28 -- # mapfile -t mem
00:02:58.746 15:45:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:58.746 15:45:38 -- setup/common.sh@31 -- # IFS=': '
00:02:58.746 15:45:38 -- setup/common.sh@31 -- # read -r var val _
00:02:58.746 15:45:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170826564 kB' 'MemAvailable: 174650032 kB' 'Buffers: 3888 kB' 'Cached: 14431388 kB' 'SwapCached: 0 kB' 'Active: 11413276 kB' 'Inactive: 3663216 kB' 'Active(anon): 10355856 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644540 kB' 'Mapped: 245072 kB' 'Shmem: 9714640 kB' 'KReclaimable: 494092 kB' 'Slab: 1127980 kB' 'SReclaimable: 494092 kB' 'SUnreclaim: 633888 kB' 'KernelStack: 20576 kB' 'PageTables: 10088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11865408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316052 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB'
00:02:58.746 15:45:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:58.746 15:45:38 -- setup/common.sh@32 -- # continue
[... identical @31 IFS/read and @32 compare/continue trace records repeat for each remaining /proc/meminfo field until HugePages_Surp is reached ...]
00:02:58.747 15:45:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:58.747 15:45:38 -- setup/common.sh@33 -- # echo 0
00:02:58.747 15:45:38 -- setup/common.sh@33 -- # return 0
00:02:58.747 15:45:38 -- setup/hugepages.sh@99 -- # surp=0
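At this point the verification has the whole-system surplus count (surp=0) and is about to fetch the reserved count. The same counters can be spot-checked by hand outside the test harness; a plain grep against /proc/meminfo is enough, and the values shown in the comment are the ones visible in the snapshots above.

    # Manual cross-check of the hugepage counters the verification is collecting.
    grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo
    # On this box the snapshots report: Total 1024, Free 1024, Rsvd 0, Surp 0.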
00:02:58.747 15:45:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:58.747 15:45:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:58.747 15:45:38 -- setup/common.sh@18 -- # local node=
00:02:58.747 15:45:38 -- setup/common.sh@19 -- # local var val
00:02:58.747 15:45:38 -- setup/common.sh@20 -- # local mem_f mem
00:02:58.747 15:45:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:58.747 15:45:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:58.747 15:45:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:58.747 15:45:38 -- setup/common.sh@28 -- # mapfile -t mem
00:02:58.747 15:45:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:58.747 15:45:38 -- setup/common.sh@31 -- # IFS=': '
00:02:58.747 15:45:38 -- setup/common.sh@31 -- # read -r var val _
00:02:58.747 15:45:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 170826780 kB' 'MemAvailable: 174650248 kB' 'Buffers: 3888 kB' 'Cached: 14431400 kB' 'SwapCached: 0 kB' 'Active: 11413276 kB' 'Inactive: 3663216 kB' 'Active(anon): 10355856 kB' 'Inactive(anon): 0 kB' 'Active(file): 1057420 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644520 kB' 'Mapped: 245072 kB' 'Shmem: 9714652 kB' 'KReclaimable: 494092 kB' 'Slab: 1127948 kB' 'SReclaimable: 494092 kB' 'SUnreclaim: 633856 kB' 'KernelStack: 20560 kB' 'PageTables: 10024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11865424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316068 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB'
00:02:58.747 15:45:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:58.747 15:45:38 -- setup/common.sh@32 -- # continue
[... identical @31 IFS/read and @32 compare/continue trace records repeat for each remaining /proc/meminfo field until HugePages_Rsvd is reached ...]
00:02:59.010 15:45:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:59.010 15:45:38 -- setup/common.sh@33 -- # echo 0
00:02:59.010 15:45:38 -- setup/common.sh@33 -- # return 0
00:02:59.010 15:45:38 -- setup/hugepages.sh@100 -- # resv=0
00:02:59.010 15:45:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:59.010 nr_hugepages=1024
00:02:59.010 15:45:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:59.010 resv_hugepages=0
00:02:59.010 15:45:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:59.010 surplus_hugepages=0
00:02:59.010 15:45:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:59.010 anon_hugepages=0
00:02:59.010 15:45:38 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:59.010 15:45:38 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
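The two arithmetic checks at hugepages.sh@107 and @109 are the first consistency checks in verify_nr_hugepages: the expected pool size must equal the configured nr_hugepages plus the surplus and reserved counters just gathered. A short sketch with the values from this run; the variable names mirror the trace and the 1024 literal is the expected pool size for this job, so treat it as an illustration rather than the exact script.

    # Accounting mirrored from the @107/@109 checks, using this run's values.
    nr_hugepages=1024   # pool size echoed above
    surp=0              # HugePages_Surp, fetched above
    resv=0              # HugePages_Rsvd, fetched above
    (( 1024 == nr_hugepages + surp + resv )) && (( 1024 == nr_hugepages )) \
        && echo 'both hugepage accounting checks pass, as in the log'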
kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11865440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316068 kB' 'VmallocChunk: 0 kB' 'Percpu: 109440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3382228 kB' 'DirectMap2M: 28803072 kB' 'DirectMap1G: 169869312 kB' 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.010 
15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.010 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.010 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 
15:45:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 
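The long run of continue entries here is xtrace noise from a key scan: setup/common.sh reads the meminfo file one 'Key: value' pair at a time and skips every key that is not the one requested (HugePages_Rsvd above, HugePages_Total in this pass), so each skipped key shows up as one [[ ... ]] / continue pair, and a match ends the loop through the echo / return 0 pair seen at the end of each scan. A minimal sketch of that pattern, assuming the helper name and the exact node handling rather than copying setup/common.sh:

  # Hedged sketch of the meminfo key scan exercised in the trace around this point.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}             # e.g. HugePages_Total, optional NUMA node
      local mem_f=/proc/meminfo
      [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
      local var val _
      # Per-node meminfo lines carry a "Node N " prefix; the trace shows it being
      # stripped before the scan, and sed stands in for that step here.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue # every skipped key is one continue entry in the log
          echo "$val"                      # value in kB, or a page count for HugePages_*
          return 0
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
  }

On a hit the caller gets back the bare number, which is why a few hundred lines of scanning collapse into a single echo 1024 for HugePages_Total further down.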
00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 
15:45:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.011 15:45:38 -- setup/common.sh@33 -- # echo 1024 00:02:59.011 15:45:38 -- setup/common.sh@33 -- # return 0 00:02:59.011 15:45:38 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:59.011 15:45:38 -- setup/hugepages.sh@112 -- # get_nodes 00:02:59.011 15:45:38 -- setup/hugepages.sh@27 -- # local node 00:02:59.011 15:45:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.011 15:45:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:59.011 15:45:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.011 15:45:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:59.011 15:45:38 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:59.011 15:45:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:59.011 15:45:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:59.011 15:45:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:59.011 15:45:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:59.011 15:45:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.011 15:45:38 -- setup/common.sh@18 -- # local node=0 00:02:59.011 15:45:38 -- setup/common.sh@19 -- # local var val 00:02:59.011 15:45:38 -- setup/common.sh@20 -- # local mem_f mem 00:02:59.011 15:45:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.011 15:45:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:59.011 15:45:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:59.011 15:45:38 -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.011 15:45:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.011 15:45:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 90949192 kB' 'MemUsed: 6666436 kB' 'SwapCached: 0 kB' 'Active: 3203040 kB' 'Inactive: 133364 kB' 'Active(anon): 2760448 kB' 'Inactive(anon): 0 kB' 'Active(file): 442592 kB' 'Inactive(file): 133364 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2884764 kB' 'Mapped: 99868 kB' 'AnonPages: 454872 kB' 'Shmem: 2308808 kB' 'KernelStack: 11400 kB' 'PageTables: 5772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 268388 kB' 'Slab: 572616 kB' 'SReclaimable: 268388 kB' 'SUnreclaim: 304228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.011 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.011 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # 
continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # continue 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.012 15:45:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.012 15:45:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.012 15:45:38 -- setup/common.sh@33 -- # echo 0 00:02:59.012 15:45:38 -- setup/common.sh@33 -- # return 0 00:02:59.012 15:45:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:59.012 15:45:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:59.012 15:45:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:59.012 15:45:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:59.012 15:45:38 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:59.012 node0=1024 expecting 1024 00:02:59.012 15:45:38 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:59.012 00:02:59.012 real 0m5.649s 00:02:59.012 user 0m2.255s 00:02:59.012 sys 0m3.479s 00:02:59.012 15:45:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:59.012 15:45:38 -- common/autotest_common.sh@10 -- # set +x 00:02:59.012 ************************************ 00:02:59.012 END TEST no_shrink_alloc 00:02:59.012 ************************************ 00:02:59.012 15:45:38 -- setup/hugepages.sh@217 -- # clear_hp 00:02:59.012 15:45:38 -- setup/hugepages.sh@37 -- # local 
node hp 00:02:59.012 15:45:38 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:59.012 15:45:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:59.012 15:45:38 -- setup/hugepages.sh@41 -- # echo 0 00:02:59.012 15:45:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:59.012 15:45:38 -- setup/hugepages.sh@41 -- # echo 0 00:02:59.012 15:45:38 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:59.012 15:45:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:59.012 15:45:38 -- setup/hugepages.sh@41 -- # echo 0 00:02:59.012 15:45:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:59.013 15:45:38 -- setup/hugepages.sh@41 -- # echo 0 00:02:59.013 15:45:38 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:59.013 15:45:38 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:59.013 00:02:59.013 real 0m21.751s 00:02:59.013 user 0m8.539s 00:02:59.013 sys 0m12.682s 00:02:59.013 15:45:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:59.013 15:45:38 -- common/autotest_common.sh@10 -- # set +x 00:02:59.013 ************************************ 00:02:59.013 END TEST hugepages 00:02:59.013 ************************************ 00:02:59.013 15:45:38 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:59.013 15:45:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:59.013 15:45:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:59.013 15:45:38 -- common/autotest_common.sh@10 -- # set +x 00:02:59.013 ************************************ 00:02:59.013 START TEST driver 00:02:59.013 ************************************ 00:02:59.013 15:45:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:59.272 * Looking for test storage... 
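The clear_hp step that closes the hugepages suite above walks every node's hugepage sysfs directory and zeroes each pool, which is why the trace shows one echo 0 per node per hugepage size before CLEAR_HUGE=yes is exported. A rough equivalent, assuming the standard sysfs layout and the redirection target (xtrace does not print redirections):

  # Hedged sketch of the per-node hugepage clear suggested by the clear_hp trace above.
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"      # each zeroed pool appears as an "echo 0" entry in the log
      done
  done
  export CLEAR_HUGE=yes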
00:02:59.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:59.272 15:45:38 -- setup/driver.sh@68 -- # setup reset 00:02:59.272 15:45:38 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:59.272 15:45:38 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:03.464 15:45:42 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:03.464 15:45:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:03.464 15:45:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:03.464 15:45:42 -- common/autotest_common.sh@10 -- # set +x 00:03:03.464 ************************************ 00:03:03.464 START TEST guess_driver 00:03:03.464 ************************************ 00:03:03.464 15:45:42 -- common/autotest_common.sh@1111 -- # guess_driver 00:03:03.464 15:45:42 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:03.464 15:45:42 -- setup/driver.sh@47 -- # local fail=0 00:03:03.464 15:45:42 -- setup/driver.sh@49 -- # pick_driver 00:03:03.464 15:45:42 -- setup/driver.sh@36 -- # vfio 00:03:03.464 15:45:42 -- setup/driver.sh@21 -- # local iommu_grups 00:03:03.464 15:45:42 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:03.464 15:45:42 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:03.464 15:45:42 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:03.464 15:45:42 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:03.464 15:45:42 -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:03.464 15:45:42 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:03.464 15:45:42 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:03.464 15:45:42 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:03.464 15:45:42 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:03.464 15:45:42 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:03.464 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:03.464 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:03.464 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:03.464 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:03.464 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:03.464 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:03.464 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:03.464 15:45:42 -- setup/driver.sh@30 -- # return 0 00:03:03.464 15:45:42 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:03.464 15:45:42 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:03.464 15:45:42 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:03.464 15:45:42 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:03.464 Looking for driver=vfio-pci 00:03:03.464 15:45:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:03.464 15:45:42 -- setup/driver.sh@45 -- # setup output config 00:03:03.464 15:45:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.464 15:45:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:05.365 15:45:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.365 15:45:45 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:03:05.365 15:45:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.625 15:45:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.625 15:45:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.625 15:45:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.625 15:45:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.625 15:45:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.625 15:45:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.625 15:45:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.625 15:45:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.625 15:45:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.625 15:45:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.625 15:45:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.625 15:45:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.625 15:45:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.625 15:45:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.625 15:45:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.625 15:45:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.625 15:45:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.625 15:45:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.625 15:45:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.625 15:45:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.625 15:45:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.625 15:45:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.625 15:45:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.625 15:45:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.625 15:45:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.625 15:45:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.625 15:45:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.625 15:45:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.625 15:45:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.625 15:45:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.625 15:45:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.625 15:45:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.625 15:45:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.625 15:45:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.625 15:45:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.625 15:45:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.625 15:45:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.625 15:45:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.625 15:45:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.625 15:45:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.625 15:45:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.625 15:45:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:05.625 15:45:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:05.625 15:45:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:05.625 15:45:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:06.561 15:45:45 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:03:06.561 15:45:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:06.561 15:45:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:06.561 15:45:46 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:06.561 15:45:46 -- setup/driver.sh@65 -- # setup reset 00:03:06.561 15:45:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:06.561 15:45:46 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:10.754 00:03:10.754 real 0m7.138s 00:03:10.754 user 0m1.953s 00:03:10.754 sys 0m3.605s 00:03:10.754 15:45:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:10.754 15:45:49 -- common/autotest_common.sh@10 -- # set +x 00:03:10.754 ************************************ 00:03:10.754 END TEST guess_driver 00:03:10.754 ************************************ 00:03:10.754 00:03:10.754 real 0m11.325s 00:03:10.754 user 0m3.138s 00:03:10.754 sys 0m5.841s 00:03:10.754 15:45:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:10.754 15:45:50 -- common/autotest_common.sh@10 -- # set +x 00:03:10.754 ************************************ 00:03:10.754 END TEST driver 00:03:10.754 ************************************ 00:03:10.754 15:45:50 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:10.754 15:45:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:10.754 15:45:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:10.754 15:45:50 -- common/autotest_common.sh@10 -- # set +x 00:03:10.754 ************************************ 00:03:10.754 START TEST devices 00:03:10.754 ************************************ 00:03:10.754 15:45:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:10.754 * Looking for test storage... 
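The driver suite that just wrapped up settles on vfio-pci because this host exposes populated IOMMU groups (174 of them in the trace) and the vfio_pci module chain resolves through modprobe --show-depends; only if those checks failed would it fall back to another userspace driver. A condensed sketch of that decision, where the fallback name and the exact checks are assumptions rather than the script's literal logic:

  # Hedged sketch of the driver choice seen in the guess_driver trace above.
  pick_driver_sketch() {
      local groups=(/sys/kernel/iommu_groups/*)
      if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci &> /dev/null; then
          echo vfio-pci                    # this run: 174 IOMMU groups and a resolvable module chain
      else
          echo uio_pci_generic             # assumed fallback, not exercised here
      fi
  }

The repeated [[ vfio-pci == vfio-pci ]] checks in the setup output config loop above are then just the verification that every device line reported by setup.sh uses that same driver.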
00:03:10.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:10.754 15:45:50 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:10.754 15:45:50 -- setup/devices.sh@192 -- # setup reset 00:03:10.754 15:45:50 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.754 15:45:50 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.039 15:45:53 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:14.039 15:45:53 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:14.039 15:45:53 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:14.039 15:45:53 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:14.039 15:45:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:14.039 15:45:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:14.039 15:45:53 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:14.039 15:45:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:14.039 15:45:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:14.039 15:45:53 -- setup/devices.sh@196 -- # blocks=() 00:03:14.039 15:45:53 -- setup/devices.sh@196 -- # declare -a blocks 00:03:14.039 15:45:53 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:14.039 15:45:53 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:14.039 15:45:53 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:14.039 15:45:53 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:14.039 15:45:53 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:14.039 15:45:53 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:14.039 15:45:53 -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:14.039 15:45:53 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:14.039 15:45:53 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:14.039 15:45:53 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:14.039 15:45:53 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:14.039 No valid GPT data, bailing 00:03:14.039 15:45:53 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:14.039 15:45:53 -- scripts/common.sh@391 -- # pt= 00:03:14.039 15:45:53 -- scripts/common.sh@392 -- # return 1 00:03:14.039 15:45:53 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:14.039 15:45:53 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:14.039 15:45:53 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:14.039 15:45:53 -- setup/common.sh@80 -- # echo 1000204886016 00:03:14.039 15:45:53 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:14.039 15:45:53 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:14.039 15:45:53 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:14.039 15:45:53 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:14.039 15:45:53 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:14.039 15:45:53 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:14.039 15:45:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:14.039 15:45:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:14.039 15:45:53 -- common/autotest_common.sh@10 -- # set +x 00:03:14.039 ************************************ 00:03:14.039 START TEST nvme_mount 00:03:14.039 ************************************ 00:03:14.039 15:45:53 -- 
common/autotest_common.sh@1111 -- # nvme_mount 00:03:14.039 15:45:53 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:14.039 15:45:53 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:14.039 15:45:53 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:14.039 15:45:53 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:14.039 15:45:53 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:14.039 15:45:53 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:14.039 15:45:53 -- setup/common.sh@40 -- # local part_no=1 00:03:14.039 15:45:53 -- setup/common.sh@41 -- # local size=1073741824 00:03:14.039 15:45:53 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:14.039 15:45:53 -- setup/common.sh@44 -- # parts=() 00:03:14.039 15:45:53 -- setup/common.sh@44 -- # local parts 00:03:14.039 15:45:53 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:14.039 15:45:53 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:14.039 15:45:53 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:14.039 15:45:53 -- setup/common.sh@46 -- # (( part++ )) 00:03:14.039 15:45:53 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:14.039 15:45:53 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:14.039 15:45:53 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:14.039 15:45:53 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:14.976 Creating new GPT entries in memory. 00:03:14.976 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:14.976 other utilities. 00:03:14.976 15:45:54 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:14.976 15:45:54 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:14.976 15:45:54 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:14.976 15:45:54 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:14.976 15:45:54 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:15.914 Creating new GPT entries in memory. 00:03:15.914 The operation has completed successfully. 
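The 'operation has completed successfully' line is sgdisk confirming the single test partition, and the arithmetic in the trace is plain: size=1073741824 bytes divided by 512 gives 2097152 sectors, and with the first usable sector at 2048 the partition ends at 2048 + 2097152 - 1 = 2099199, exactly the --new=1:2048:2099199 argument logged above. A minimal stand-alone version of the partition, format and mount flow that continues just below, with the device and mount point as placeholders for the detected test disk:

  # Hedged sketch of the partition/format/mount sequence traced here.
  disk=/dev/nvme0n1                 # placeholder; the run uses the detected test disk
  size=$(( 1073741824 / 512 ))      # 1 GiB in 512-byte sectors -> 2097152
  start=2048
  end=$(( start + size - 1 ))       # 2099199, matching --new=1:2048:2099199 in the log
  sgdisk "$disk" --zap-all
  flock "$disk" sgdisk "$disk" --new=1:"$start":"$end"
  mkfs.ext4 -qF "${disk}p1"
  mkdir -p /mnt/nvme_test           # placeholder mount point
  mount "${disk}p1" /mnt/nvme_test

The script also runs sync_dev_uevents.sh block/partition nvme0n1p1 around this step, visible a few lines above, presumably so the mkfs only proceeds once the new partition node exists.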
00:03:15.914 15:45:55 -- setup/common.sh@57 -- # (( part++ )) 00:03:15.914 15:45:55 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:15.914 15:45:55 -- setup/common.sh@62 -- # wait 2238807 00:03:15.914 15:45:55 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:15.914 15:45:55 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:15.914 15:45:55 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:15.914 15:45:55 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:15.914 15:45:55 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:15.914 15:45:55 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:15.914 15:45:55 -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:15.914 15:45:55 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:15.914 15:45:55 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:15.914 15:45:55 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:15.914 15:45:55 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:15.914 15:45:55 -- setup/devices.sh@53 -- # local found=0 00:03:15.914 15:45:55 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:15.914 15:45:55 -- setup/devices.sh@56 -- # : 00:03:15.914 15:45:55 -- setup/devices.sh@59 -- # local pci status 00:03:15.914 15:45:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.914 15:45:55 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:15.914 15:45:55 -- setup/devices.sh@47 -- # setup output config 00:03:15.914 15:45:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.914 15:45:55 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:18.450 15:45:58 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.450 15:45:58 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:18.450 15:45:58 -- setup/devices.sh@63 -- # found=1 00:03:18.450 15:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.450 15:45:58 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.450 15:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.450 15:45:58 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.450 15:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.450 15:45:58 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.450 15:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.450 15:45:58 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.450 15:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.450 15:45:58 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.450 
15:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.450 15:45:58 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.450 15:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.450 15:45:58 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.450 15:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.450 15:45:58 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.450 15:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.450 15:45:58 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.450 15:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.450 15:45:58 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.450 15:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.450 15:45:58 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.450 15:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.450 15:45:58 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.450 15:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.450 15:45:58 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.450 15:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.450 15:45:58 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.450 15:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.450 15:45:58 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.450 15:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.450 15:45:58 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:18.450 15:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.710 15:45:58 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:18.710 15:45:58 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:18.710 15:45:58 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:18.710 15:45:58 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:18.710 15:45:58 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:18.710 15:45:58 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:18.710 15:45:58 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:18.710 15:45:58 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:18.710 15:45:58 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:18.710 15:45:58 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:18.710 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:18.710 15:45:58 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:18.710 15:45:58 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:18.969 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:18.970 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:18.970 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:18.970 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:18.970 15:45:58 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:18.970 15:45:58 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:18.970 15:45:58 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:18.970 15:45:58 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:18.970 15:45:58 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:18.970 15:45:58 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:18.970 15:45:58 -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:18.970 15:45:58 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:18.970 15:45:58 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:18.970 15:45:58 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:18.970 15:45:58 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:18.970 15:45:58 -- setup/devices.sh@53 -- # local found=0 00:03:18.970 15:45:58 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:18.970 15:45:58 -- setup/devices.sh@56 -- # : 00:03:18.970 15:45:58 -- setup/devices.sh@59 -- # local pci status 00:03:18.970 15:45:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.970 15:45:58 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:18.970 15:45:58 -- setup/devices.sh@47 -- # setup output config 00:03:18.970 15:45:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.970 15:45:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:21.509 15:46:01 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.509 15:46:01 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:21.509 15:46:01 -- setup/devices.sh@63 -- # found=1 00:03:21.509 15:46:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.509 15:46:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.509 15:46:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.509 15:46:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.509 15:46:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.509 15:46:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.509 15:46:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.509 15:46:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.509 15:46:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.509 15:46:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.509 15:46:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.509 15:46:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.509 15:46:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.509 15:46:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.509 15:46:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.509 15:46:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.509 15:46:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.509 15:46:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.509 15:46:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.509 15:46:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.509 15:46:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.509 15:46:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.509 15:46:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.509 15:46:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.509 15:46:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.509 15:46:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.509 15:46:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.509 15:46:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.509 15:46:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.509 15:46:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.510 15:46:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.510 15:46:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:21.510 15:46:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.768 15:46:01 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:21.768 15:46:01 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:21.768 15:46:01 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:21.769 15:46:01 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:21.769 15:46:01 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:21.769 15:46:01 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:21.769 15:46:01 -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:21.769 15:46:01 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:21.769 15:46:01 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:21.769 15:46:01 -- setup/devices.sh@50 -- # local mount_point= 00:03:21.769 15:46:01 -- setup/devices.sh@51 -- # local test_file= 00:03:21.769 15:46:01 -- setup/devices.sh@53 -- # local found=0 00:03:21.769 15:46:01 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:21.769 15:46:01 -- setup/devices.sh@59 -- # local pci status 00:03:21.769 15:46:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.769 15:46:01 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:21.769 15:46:01 -- setup/devices.sh@47 -- # setup output config 00:03:21.769 15:46:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.769 15:46:01 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:24.305 15:46:03 -- 
setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.305 15:46:03 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:24.305 15:46:03 -- setup/devices.sh@63 -- # found=1 00:03:24.305 15:46:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.305 15:46:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.305 15:46:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.305 15:46:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.305 15:46:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.305 15:46:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.305 15:46:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.305 15:46:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.305 15:46:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.305 15:46:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.305 15:46:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.305 15:46:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.305 15:46:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.305 15:46:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.305 15:46:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.305 15:46:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.305 15:46:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.305 15:46:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.305 15:46:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.305 15:46:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.305 15:46:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.305 15:46:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.305 15:46:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.305 15:46:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.305 15:46:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.305 15:46:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.305 15:46:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.305 15:46:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.305 15:46:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.305 15:46:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.305 15:46:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.305 15:46:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:24.305 15:46:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.305 15:46:03 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:24.305 15:46:03 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:24.305 15:46:03 -- setup/devices.sh@68 -- # return 0 00:03:24.305 15:46:03 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:24.305 15:46:03 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:24.305 15:46:03 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:03:24.305 15:46:03 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:24.305 15:46:03 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:24.305 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:24.305 00:03:24.305 real 0m10.555s 00:03:24.305 user 0m3.063s 00:03:24.305 sys 0m5.274s 00:03:24.305 15:46:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:24.305 15:46:03 -- common/autotest_common.sh@10 -- # set +x 00:03:24.305 ************************************ 00:03:24.305 END TEST nvme_mount 00:03:24.305 ************************************ 00:03:24.305 15:46:03 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:24.305 15:46:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:24.305 15:46:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:24.305 15:46:03 -- common/autotest_common.sh@10 -- # set +x 00:03:24.563 ************************************ 00:03:24.563 START TEST dm_mount 00:03:24.564 ************************************ 00:03:24.564 15:46:04 -- common/autotest_common.sh@1111 -- # dm_mount 00:03:24.564 15:46:04 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:24.564 15:46:04 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:24.564 15:46:04 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:24.564 15:46:04 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:24.564 15:46:04 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:24.564 15:46:04 -- setup/common.sh@40 -- # local part_no=2 00:03:24.564 15:46:04 -- setup/common.sh@41 -- # local size=1073741824 00:03:24.564 15:46:04 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:24.564 15:46:04 -- setup/common.sh@44 -- # parts=() 00:03:24.564 15:46:04 -- setup/common.sh@44 -- # local parts 00:03:24.564 15:46:04 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:24.564 15:46:04 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:24.564 15:46:04 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:24.564 15:46:04 -- setup/common.sh@46 -- # (( part++ )) 00:03:24.564 15:46:04 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:24.564 15:46:04 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:24.564 15:46:04 -- setup/common.sh@46 -- # (( part++ )) 00:03:24.564 15:46:04 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:24.564 15:46:04 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:24.564 15:46:04 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:24.564 15:46:04 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:25.501 Creating new GPT entries in memory. 00:03:25.501 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:25.501 other utilities. 00:03:25.501 15:46:05 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:25.501 15:46:05 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:25.501 15:46:05 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:25.501 15:46:05 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:25.501 15:46:05 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:26.881 Creating new GPT entries in memory. 00:03:26.881 The operation has completed successfully. 
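The dm_mount test above wipes /dev/nvme0n1 with sgdisk --zap-all and then creates two 1 GiB GPT partitions (sectors 2048-2099199 and 2099200-4196351) before building a device-mapper target on them. Below is a minimal standalone sketch of that partitioning sequence, not the SPDK helper itself; the device path and partition sizes are taken from the trace, and it assumes the disk is a disposable scratch device that may be wiped.

  #!/usr/bin/env bash
  # Sketch: recreate two 1 GiB GPT partitions on a scratch NVMe disk,
  # mirroring the sgdisk calls visible in the trace above.
  set -euo pipefail

  disk=/dev/nvme0n1                              # scratch device from the log; adjust before use
  size_sectors=$((1024 * 1024 * 1024 / 512))     # 1 GiB expressed in 512-byte sectors

  sgdisk "$disk" --zap-all                       # destroy any existing GPT/MBR structures

  part_start=2048                                # first usable sector, as in the trace
  for part in 1 2; do
      part_end=$((part_start + size_sectors - 1))
      # flock serializes access to the block device while sgdisk rewrites the GPT
      flock "$disk" sgdisk "$disk" --new=${part}:${part_start}:${part_end}
      part_start=$((part_end + 1))
  done

  partprobe "$disk"                              # ask the kernel to re-read the partition table
  lsblk "$disk"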
00:03:26.881 15:46:06 -- setup/common.sh@57 -- # (( part++ )) 00:03:26.881 15:46:06 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:26.881 15:46:06 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:26.881 15:46:06 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:26.881 15:46:06 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:27.820 The operation has completed successfully. 00:03:27.820 15:46:07 -- setup/common.sh@57 -- # (( part++ )) 00:03:27.820 15:46:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:27.820 15:46:07 -- setup/common.sh@62 -- # wait 2242885 00:03:27.820 15:46:07 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:27.820 15:46:07 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:27.820 15:46:07 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:27.820 15:46:07 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:27.820 15:46:07 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:27.820 15:46:07 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:27.820 15:46:07 -- setup/devices.sh@161 -- # break 00:03:27.820 15:46:07 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:27.820 15:46:07 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:27.820 15:46:07 -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:03:27.820 15:46:07 -- setup/devices.sh@166 -- # dm=dm-2 00:03:27.820 15:46:07 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:03:27.820 15:46:07 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:03:27.820 15:46:07 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:27.820 15:46:07 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:27.820 15:46:07 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:27.820 15:46:07 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:27.820 15:46:07 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:27.820 15:46:07 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:27.820 15:46:07 -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:27.820 15:46:07 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:27.820 15:46:07 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:27.820 15:46:07 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:27.820 15:46:07 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:27.820 15:46:07 -- setup/devices.sh@53 -- # local found=0 00:03:27.820 15:46:07 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:27.820 15:46:07 -- setup/devices.sh@56 -- # : 00:03:27.820 15:46:07 -- 
setup/devices.sh@59 -- # local pci status 00:03:27.820 15:46:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.820 15:46:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:27.820 15:46:07 -- setup/devices.sh@47 -- # setup output config 00:03:27.820 15:46:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.820 15:46:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:30.357 15:46:09 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.357 15:46:09 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:30.357 15:46:09 -- setup/devices.sh@63 -- # found=1 00:03:30.357 15:46:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.357 15:46:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.357 15:46:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.357 15:46:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.357 15:46:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.357 15:46:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.357 15:46:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.357 15:46:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.357 15:46:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.357 15:46:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.357 15:46:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.357 15:46:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.357 15:46:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.357 15:46:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.357 15:46:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.357 15:46:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.357 15:46:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.357 15:46:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.357 15:46:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.357 15:46:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.357 15:46:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.357 15:46:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.357 15:46:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.357 15:46:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.357 15:46:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.357 15:46:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.357 15:46:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.357 15:46:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.357 15:46:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.357 15:46:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.357 15:46:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.357 15:46:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.357 15:46:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.357 15:46:09 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:30.357 15:46:09 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:30.358 15:46:09 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:30.358 15:46:09 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:30.358 15:46:09 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:30.358 15:46:09 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:30.358 15:46:09 -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:03:30.358 15:46:09 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:30.358 15:46:09 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:03:30.358 15:46:09 -- setup/devices.sh@50 -- # local mount_point= 00:03:30.358 15:46:09 -- setup/devices.sh@51 -- # local test_file= 00:03:30.358 15:46:09 -- setup/devices.sh@53 -- # local found=0 00:03:30.358 15:46:09 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:30.358 15:46:09 -- setup/devices.sh@59 -- # local pci status 00:03:30.358 15:46:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.358 15:46:09 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:30.358 15:46:09 -- setup/devices.sh@47 -- # setup output config 00:03:30.358 15:46:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.358 15:46:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:32.965 15:46:12 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.965 15:46:12 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:03:32.965 15:46:12 -- setup/devices.sh@63 -- # found=1 00:03:32.965 15:46:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.965 15:46:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.965 15:46:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.965 15:46:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.965 15:46:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.965 15:46:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.965 15:46:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.965 15:46:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.965 15:46:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.965 15:46:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.965 15:46:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.965 15:46:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.965 15:46:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.965 15:46:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.965 15:46:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:03:32.965 15:46:12 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.965 15:46:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.965 15:46:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.965 15:46:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.965 15:46:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.965 15:46:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.965 15:46:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.965 15:46:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.965 15:46:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.965 15:46:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.965 15:46:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.965 15:46:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.965 15:46:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.965 15:46:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.965 15:46:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.965 15:46:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.965 15:46:12 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.965 15:46:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.966 15:46:12 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:32.966 15:46:12 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:32.966 15:46:12 -- setup/devices.sh@68 -- # return 0 00:03:32.966 15:46:12 -- setup/devices.sh@187 -- # cleanup_dm 00:03:32.966 15:46:12 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:32.966 15:46:12 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:32.966 15:46:12 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:32.966 15:46:12 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:32.966 15:46:12 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:32.966 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:32.966 15:46:12 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:32.966 15:46:12 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:32.966 00:03:32.966 real 0m8.543s 00:03:32.966 user 0m1.958s 00:03:32.966 sys 0m3.573s 00:03:32.966 15:46:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:32.966 15:46:12 -- common/autotest_common.sh@10 -- # set +x 00:03:32.966 ************************************ 00:03:32.966 END TEST dm_mount 00:03:32.966 ************************************ 00:03:33.225 15:46:12 -- setup/devices.sh@1 -- # cleanup 00:03:33.225 15:46:12 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:33.225 15:46:12 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.225 15:46:12 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:33.225 15:46:12 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:33.225 15:46:12 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:33.225 15:46:12 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:33.484 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:33.484 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:33.484 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:33.484 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:33.484 15:46:12 -- setup/devices.sh@12 -- # cleanup_dm 00:03:33.484 15:46:12 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:33.484 15:46:12 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:33.484 15:46:12 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:33.484 15:46:12 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:33.484 15:46:12 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:33.484 15:46:12 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:33.484 00:03:33.484 real 0m22.787s 00:03:33.484 user 0m6.282s 00:03:33.484 sys 0m11.103s 00:03:33.484 15:46:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:33.484 15:46:12 -- common/autotest_common.sh@10 -- # set +x 00:03:33.484 ************************************ 00:03:33.484 END TEST devices 00:03:33.484 ************************************ 00:03:33.484 00:03:33.484 real 1m15.826s 00:03:33.484 user 0m24.581s 00:03:33.484 sys 0m41.408s 00:03:33.484 15:46:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:33.484 15:46:12 -- common/autotest_common.sh@10 -- # set +x 00:03:33.484 ************************************ 00:03:33.484 END TEST setup.sh 00:03:33.484 ************************************ 00:03:33.484 15:46:13 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:36.018 Hugepages 00:03:36.018 node hugesize free / total 00:03:36.018 node0 1048576kB 0 / 0 00:03:36.018 node0 2048kB 2048 / 2048 00:03:36.018 node1 1048576kB 0 / 0 00:03:36.018 node1 2048kB 0 / 0 00:03:36.018 00:03:36.019 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:36.019 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:36.019 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:36.019 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:36.019 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:36.019 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:36.019 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:36.277 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:36.277 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:36.277 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:36.277 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:36.277 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:36.277 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:36.277 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:36.277 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:36.277 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:36.278 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:36.278 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:36.278 15:46:15 -- spdk/autotest.sh@130 -- # uname -s 00:03:36.278 15:46:15 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:36.278 15:46:15 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:36.278 15:46:15 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:38.817 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:38.817 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:38.817 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:38.817 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:38.817 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:38.817 0000:00:04.2 (8086 2021): 
ioatdma -> vfio-pci 00:03:38.817 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:38.817 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:38.817 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:38.817 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:38.817 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:38.817 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:38.817 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:38.817 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:38.817 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:38.817 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:39.756 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:39.756 15:46:19 -- common/autotest_common.sh@1518 -- # sleep 1 00:03:40.693 15:46:20 -- common/autotest_common.sh@1519 -- # bdfs=() 00:03:40.693 15:46:20 -- common/autotest_common.sh@1519 -- # local bdfs 00:03:40.693 15:46:20 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:40.693 15:46:20 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:40.693 15:46:20 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:40.693 15:46:20 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:40.693 15:46:20 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:40.693 15:46:20 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:40.693 15:46:20 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:40.693 15:46:20 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:03:40.693 15:46:20 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:5e:00.0 00:03:40.693 15:46:20 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:43.227 Waiting for block devices as requested 00:03:43.227 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:43.486 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:43.486 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:43.486 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:43.758 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:43.758 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:43.758 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:43.758 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:44.017 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:44.017 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:44.017 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:44.017 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:44.277 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:44.277 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:44.277 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:44.537 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:44.537 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:44.537 15:46:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:44.537 15:46:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:44.537 15:46:24 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:03:44.537 15:46:24 -- common/autotest_common.sh@1488 -- # grep 0000:5e:00.0/nvme/nvme 00:03:44.537 15:46:24 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:44.537 15:46:24 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:44.537 15:46:24 -- 
common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:44.537 15:46:24 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:03:44.537 15:46:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:44.537 15:46:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:44.537 15:46:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:44.537 15:46:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:44.537 15:46:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:44.537 15:46:24 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:44.537 15:46:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:44.537 15:46:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:44.537 15:46:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:44.537 15:46:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:44.537 15:46:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:44.537 15:46:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:44.537 15:46:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:44.537 15:46:24 -- common/autotest_common.sh@1543 -- # continue 00:03:44.537 15:46:24 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:44.537 15:46:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:44.537 15:46:24 -- common/autotest_common.sh@10 -- # set +x 00:03:44.537 15:46:24 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:44.537 15:46:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:44.537 15:46:24 -- common/autotest_common.sh@10 -- # set +x 00:03:44.537 15:46:24 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:47.830 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:47.830 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:47.830 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:47.830 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:47.830 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:47.830 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:47.830 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:47.830 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:47.830 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:47.830 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:47.830 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:47.830 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:47.830 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:47.830 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:47.830 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:47.830 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:48.397 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:48.397 15:46:27 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:48.397 15:46:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:48.397 15:46:27 -- common/autotest_common.sh@10 -- # set +x 00:03:48.397 15:46:27 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:48.397 15:46:27 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:03:48.397 15:46:27 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:03:48.397 15:46:27 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:48.397 15:46:27 -- common/autotest_common.sh@1563 -- # local bdfs 00:03:48.397 15:46:27 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:03:48.397 15:46:27 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:48.397 
15:46:27 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:48.397 15:46:27 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:48.397 15:46:27 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:48.397 15:46:27 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:48.397 15:46:28 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:03:48.397 15:46:28 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:5e:00.0 00:03:48.397 15:46:28 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:03:48.397 15:46:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:48.397 15:46:28 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:48.397 15:46:28 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:48.397 15:46:28 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:48.397 15:46:28 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:5e:00.0 00:03:48.397 15:46:28 -- common/autotest_common.sh@1578 -- # [[ -z 0000:5e:00.0 ]] 00:03:48.397 15:46:28 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=2251789 00:03:48.397 15:46:28 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.397 15:46:28 -- common/autotest_common.sh@1584 -- # waitforlisten 2251789 00:03:48.397 15:46:28 -- common/autotest_common.sh@817 -- # '[' -z 2251789 ']' 00:03:48.397 15:46:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:48.397 15:46:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:48.397 15:46:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:48.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:48.397 15:46:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:48.397 15:46:28 -- common/autotest_common.sh@10 -- # set +x 00:03:48.655 [2024-04-26 15:46:28.155896] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
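The opal_revert_cleanup step above collects NVMe addresses via gen_nvme.sh and then keeps only controllers whose PCI device ID is 0x0a54 by reading /sys/bus/pci/devices/<bdf>/device. A rough sysfs-only illustration of that filter follows; it bypasses SPDK's gen_nvme.sh and simply walks kernel-visible PCI functions, so treat it as a sketch rather than the harness's own logic.

  #!/usr/bin/env bash
  # Sketch: find PCI functions that are NVMe controllers (class 0x010802) with
  # the device ID 0x0a54 seen in the trace, using sysfs only.
  set -u

  want=0x0a54
  for dev in /sys/bus/pci/devices/*; do
      [[ $(cat "$dev/class") == 0x010802 ]] || continue   # mass storage / NVM / NVMe
      [[ $(cat "$dev/device") == "$want" ]] || continue
      basename "$dev"                                     # prints the BDF, e.g. 0000:5e:00.0
  done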
00:03:48.656 [2024-04-26 15:46:28.155995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2251789 ] 00:03:48.656 EAL: No free 2048 kB hugepages reported on node 1 00:03:48.656 [2024-04-26 15:46:28.257676] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.914 [2024-04-26 15:46:28.468425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.851 15:46:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:49.851 15:46:29 -- common/autotest_common.sh@850 -- # return 0 00:03:49.851 15:46:29 -- common/autotest_common.sh@1586 -- # bdf_id=0 00:03:49.851 15:46:29 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:03:49.851 15:46:29 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:53.139 nvme0n1 00:03:53.139 15:46:32 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:53.139 [2024-04-26 15:46:32.602193] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:53.139 request: 00:03:53.139 { 00:03:53.139 "nvme_ctrlr_name": "nvme0", 00:03:53.139 "password": "test", 00:03:53.139 "method": "bdev_nvme_opal_revert", 00:03:53.139 "req_id": 1 00:03:53.139 } 00:03:53.139 Got JSON-RPC error response 00:03:53.139 response: 00:03:53.139 { 00:03:53.139 "code": -32602, 00:03:53.139 "message": "Invalid parameters" 00:03:53.139 } 00:03:53.139 15:46:32 -- common/autotest_common.sh@1590 -- # true 00:03:53.139 15:46:32 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:03:53.139 15:46:32 -- common/autotest_common.sh@1594 -- # killprocess 2251789 00:03:53.139 15:46:32 -- common/autotest_common.sh@936 -- # '[' -z 2251789 ']' 00:03:53.139 15:46:32 -- common/autotest_common.sh@940 -- # kill -0 2251789 00:03:53.139 15:46:32 -- common/autotest_common.sh@941 -- # uname 00:03:53.139 15:46:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:53.139 15:46:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2251789 00:03:53.139 15:46:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:53.139 15:46:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:53.139 15:46:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2251789' 00:03:53.139 killing process with pid 2251789 00:03:53.139 15:46:32 -- common/autotest_common.sh@955 -- # kill 2251789 00:03:53.139 15:46:32 -- common/autotest_common.sh@960 -- # wait 2251789 00:03:57.354 15:46:36 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:57.354 15:46:36 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:57.354 15:46:36 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:57.354 15:46:36 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:57.354 15:46:36 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:57.354 15:46:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:57.354 15:46:36 -- common/autotest_common.sh@10 -- # set +x 00:03:57.354 15:46:36 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:57.354 15:46:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:57.354 15:46:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 
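The sequence above starts spdk_tgt, waits for its JSON-RPC socket, attaches the controller at 0000:5e:00.0, and attempts bdev_nvme_opal_revert, which fails with -32602 because this drive does not support Opal. A condensed sketch of that flow is below; SPDK_DIR is a placeholder, and the socket-polling loop is a simplified stand-in for the harness's waitforlisten helper, while the two rpc.py subcommands are the ones visible in the log.

  #!/usr/bin/env bash
  # Sketch of the opal-revert flow traced above: start spdk_tgt, wait for its
  # RPC socket, attach the NVMe controller, attempt the revert, then clean up.
  set -euo pipefail

  SPDK_DIR=${SPDK_DIR:-/path/to/spdk}     # assumed location of an SPDK build
  RPC_SOCK=/var/tmp/spdk.sock
  BDF=0000:5e:00.0

  "$SPDK_DIR/build/bin/spdk_tgt" &
  tgt_pid=$!

  # Simplified stand-in for waitforlisten(): poll until the RPC socket appears.
  for _ in $(seq 1 100); do
      [[ -S "$RPC_SOCK" ]] && break
      sleep 0.5
  done

  "$SPDK_DIR/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a "$BDF"
  # On drives without Opal support this returns the JSON-RPC error shown above.
  "$SPDK_DIR/scripts/rpc.py" bdev_nvme_opal_revert -b nvme0 -p test || true

  kill "$tgt_pid"
  wait "$tgt_pid" || true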
00:03:57.354 15:46:36 -- common/autotest_common.sh@10 -- # set +x 00:03:57.354 ************************************ 00:03:57.354 START TEST env 00:03:57.354 ************************************ 00:03:57.354 15:46:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:57.354 * Looking for test storage... 00:03:57.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:57.354 15:46:36 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:57.354 15:46:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:57.354 15:46:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:57.354 15:46:36 -- common/autotest_common.sh@10 -- # set +x 00:03:57.354 ************************************ 00:03:57.354 START TEST env_memory 00:03:57.354 ************************************ 00:03:57.354 15:46:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:57.354 00:03:57.354 00:03:57.354 CUnit - A unit testing framework for C - Version 2.1-3 00:03:57.354 http://cunit.sourceforge.net/ 00:03:57.354 00:03:57.354 00:03:57.354 Suite: memory 00:03:57.354 Test: alloc and free memory map ...[2024-04-26 15:46:36.563193] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:57.354 passed 00:03:57.354 Test: mem map translation ...[2024-04-26 15:46:36.602572] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:57.354 [2024-04-26 15:46:36.602594] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:57.354 [2024-04-26 15:46:36.602644] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:57.354 [2024-04-26 15:46:36.602657] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:57.354 passed 00:03:57.354 Test: mem map registration ...[2024-04-26 15:46:36.662452] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:57.354 [2024-04-26 15:46:36.662474] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:57.354 passed 00:03:57.354 Test: mem map adjacent registrations ...passed 00:03:57.354 00:03:57.354 Run Summary: Type Total Ran Passed Failed Inactive 00:03:57.354 suites 1 1 n/a 0 0 00:03:57.354 tests 4 4 4 0 0 00:03:57.354 asserts 152 152 152 0 n/a 00:03:57.354 00:03:57.354 Elapsed time = 0.219 seconds 00:03:57.354 00:03:57.354 real 0m0.249s 00:03:57.354 user 0m0.228s 00:03:57.354 sys 0m0.020s 00:03:57.354 15:46:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:57.354 15:46:36 -- common/autotest_common.sh@10 -- # set +x 00:03:57.354 ************************************ 00:03:57.354 END TEST env_memory 00:03:57.354 ************************************ 
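Every test in this log, env_memory included, is launched through a run_test wrapper that prints the START TEST / END TEST banners and the real/user/sys timings shown above. A simplified sketch of such a wrapper follows; the harness's own version in autotest_common.sh also manages xtrace and failure accounting, so this is only an approximation of the pattern.

  #!/usr/bin/env bash
  # Sketch of a run_test-style wrapper: banner, timed execution, banner.
  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                 # prints real/user/sys like the log above
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }

  # Example (path is illustrative):
  # run_test_sketch env_memory /path/to/spdk/test/env/memory/memory_ut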
00:03:57.354 15:46:36 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:57.354 15:46:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:57.354 15:46:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:57.354 15:46:36 -- common/autotest_common.sh@10 -- # set +x 00:03:57.354 ************************************ 00:03:57.354 START TEST env_vtophys 00:03:57.354 ************************************ 00:03:57.354 15:46:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:57.354 EAL: lib.eal log level changed from notice to debug 00:03:57.354 EAL: Detected lcore 0 as core 0 on socket 0 00:03:57.354 EAL: Detected lcore 1 as core 1 on socket 0 00:03:57.354 EAL: Detected lcore 2 as core 2 on socket 0 00:03:57.354 EAL: Detected lcore 3 as core 3 on socket 0 00:03:57.354 EAL: Detected lcore 4 as core 4 on socket 0 00:03:57.354 EAL: Detected lcore 5 as core 5 on socket 0 00:03:57.354 EAL: Detected lcore 6 as core 6 on socket 0 00:03:57.354 EAL: Detected lcore 7 as core 8 on socket 0 00:03:57.354 EAL: Detected lcore 8 as core 9 on socket 0 00:03:57.354 EAL: Detected lcore 9 as core 10 on socket 0 00:03:57.354 EAL: Detected lcore 10 as core 11 on socket 0 00:03:57.354 EAL: Detected lcore 11 as core 12 on socket 0 00:03:57.354 EAL: Detected lcore 12 as core 13 on socket 0 00:03:57.354 EAL: Detected lcore 13 as core 16 on socket 0 00:03:57.354 EAL: Detected lcore 14 as core 17 on socket 0 00:03:57.354 EAL: Detected lcore 15 as core 18 on socket 0 00:03:57.354 EAL: Detected lcore 16 as core 19 on socket 0 00:03:57.354 EAL: Detected lcore 17 as core 20 on socket 0 00:03:57.354 EAL: Detected lcore 18 as core 21 on socket 0 00:03:57.354 EAL: Detected lcore 19 as core 25 on socket 0 00:03:57.354 EAL: Detected lcore 20 as core 26 on socket 0 00:03:57.354 EAL: Detected lcore 21 as core 27 on socket 0 00:03:57.354 EAL: Detected lcore 22 as core 28 on socket 0 00:03:57.354 EAL: Detected lcore 23 as core 29 on socket 0 00:03:57.354 EAL: Detected lcore 24 as core 0 on socket 1 00:03:57.354 EAL: Detected lcore 25 as core 1 on socket 1 00:03:57.354 EAL: Detected lcore 26 as core 2 on socket 1 00:03:57.354 EAL: Detected lcore 27 as core 3 on socket 1 00:03:57.354 EAL: Detected lcore 28 as core 4 on socket 1 00:03:57.354 EAL: Detected lcore 29 as core 5 on socket 1 00:03:57.354 EAL: Detected lcore 30 as core 6 on socket 1 00:03:57.354 EAL: Detected lcore 31 as core 9 on socket 1 00:03:57.354 EAL: Detected lcore 32 as core 10 on socket 1 00:03:57.354 EAL: Detected lcore 33 as core 11 on socket 1 00:03:57.354 EAL: Detected lcore 34 as core 12 on socket 1 00:03:57.354 EAL: Detected lcore 35 as core 13 on socket 1 00:03:57.354 EAL: Detected lcore 36 as core 16 on socket 1 00:03:57.354 EAL: Detected lcore 37 as core 17 on socket 1 00:03:57.354 EAL: Detected lcore 38 as core 18 on socket 1 00:03:57.354 EAL: Detected lcore 39 as core 19 on socket 1 00:03:57.354 EAL: Detected lcore 40 as core 20 on socket 1 00:03:57.354 EAL: Detected lcore 41 as core 21 on socket 1 00:03:57.354 EAL: Detected lcore 42 as core 24 on socket 1 00:03:57.354 EAL: Detected lcore 43 as core 25 on socket 1 00:03:57.354 EAL: Detected lcore 44 as core 26 on socket 1 00:03:57.354 EAL: Detected lcore 45 as core 27 on socket 1 00:03:57.354 EAL: Detected lcore 46 as core 28 on socket 1 00:03:57.354 EAL: Detected lcore 47 as core 29 on socket 1 00:03:57.354 EAL: Detected lcore 48 as core 0 on 
socket 0 00:03:57.354 EAL: Detected lcore 49 as core 1 on socket 0 00:03:57.354 EAL: Detected lcore 50 as core 2 on socket 0 00:03:57.354 EAL: Detected lcore 51 as core 3 on socket 0 00:03:57.354 EAL: Detected lcore 52 as core 4 on socket 0 00:03:57.354 EAL: Detected lcore 53 as core 5 on socket 0 00:03:57.354 EAL: Detected lcore 54 as core 6 on socket 0 00:03:57.354 EAL: Detected lcore 55 as core 8 on socket 0 00:03:57.354 EAL: Detected lcore 56 as core 9 on socket 0 00:03:57.354 EAL: Detected lcore 57 as core 10 on socket 0 00:03:57.354 EAL: Detected lcore 58 as core 11 on socket 0 00:03:57.354 EAL: Detected lcore 59 as core 12 on socket 0 00:03:57.354 EAL: Detected lcore 60 as core 13 on socket 0 00:03:57.354 EAL: Detected lcore 61 as core 16 on socket 0 00:03:57.354 EAL: Detected lcore 62 as core 17 on socket 0 00:03:57.354 EAL: Detected lcore 63 as core 18 on socket 0 00:03:57.354 EAL: Detected lcore 64 as core 19 on socket 0 00:03:57.354 EAL: Detected lcore 65 as core 20 on socket 0 00:03:57.354 EAL: Detected lcore 66 as core 21 on socket 0 00:03:57.354 EAL: Detected lcore 67 as core 25 on socket 0 00:03:57.354 EAL: Detected lcore 68 as core 26 on socket 0 00:03:57.354 EAL: Detected lcore 69 as core 27 on socket 0 00:03:57.354 EAL: Detected lcore 70 as core 28 on socket 0 00:03:57.354 EAL: Detected lcore 71 as core 29 on socket 0 00:03:57.354 EAL: Detected lcore 72 as core 0 on socket 1 00:03:57.354 EAL: Detected lcore 73 as core 1 on socket 1 00:03:57.354 EAL: Detected lcore 74 as core 2 on socket 1 00:03:57.354 EAL: Detected lcore 75 as core 3 on socket 1 00:03:57.354 EAL: Detected lcore 76 as core 4 on socket 1 00:03:57.354 EAL: Detected lcore 77 as core 5 on socket 1 00:03:57.354 EAL: Detected lcore 78 as core 6 on socket 1 00:03:57.354 EAL: Detected lcore 79 as core 9 on socket 1 00:03:57.354 EAL: Detected lcore 80 as core 10 on socket 1 00:03:57.354 EAL: Detected lcore 81 as core 11 on socket 1 00:03:57.355 EAL: Detected lcore 82 as core 12 on socket 1 00:03:57.355 EAL: Detected lcore 83 as core 13 on socket 1 00:03:57.355 EAL: Detected lcore 84 as core 16 on socket 1 00:03:57.355 EAL: Detected lcore 85 as core 17 on socket 1 00:03:57.355 EAL: Detected lcore 86 as core 18 on socket 1 00:03:57.355 EAL: Detected lcore 87 as core 19 on socket 1 00:03:57.355 EAL: Detected lcore 88 as core 20 on socket 1 00:03:57.355 EAL: Detected lcore 89 as core 21 on socket 1 00:03:57.355 EAL: Detected lcore 90 as core 24 on socket 1 00:03:57.355 EAL: Detected lcore 91 as core 25 on socket 1 00:03:57.355 EAL: Detected lcore 92 as core 26 on socket 1 00:03:57.355 EAL: Detected lcore 93 as core 27 on socket 1 00:03:57.355 EAL: Detected lcore 94 as core 28 on socket 1 00:03:57.355 EAL: Detected lcore 95 as core 29 on socket 1 00:03:57.355 EAL: Maximum logical cores by configuration: 128 00:03:57.355 EAL: Detected CPU lcores: 96 00:03:57.355 EAL: Detected NUMA nodes: 2 00:03:57.355 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:57.355 EAL: Detected shared linkage of DPDK 00:03:57.355 EAL: No shared files mode enabled, IPC will be disabled 00:03:57.355 EAL: Bus pci wants IOVA as 'DC' 00:03:57.355 EAL: Buses did not request a specific IOVA mode. 00:03:57.355 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:57.355 EAL: Selected IOVA mode 'VA' 00:03:57.355 EAL: No free 2048 kB hugepages reported on node 1 00:03:57.355 EAL: Probing VFIO support... 
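The EAL banner above is DPDK's environment probe for the vtophys test: 96 logical cores across 2 NUMA sockets, IOVA mode 'VA', and VFIO with IOMMU type 1 available. The same facts can be read straight from sysfs; a small sketch using standard Linux paths is below (nothing SPDK-specific, and it assumes 2 MB hugepages are configured as on this node).

  #!/usr/bin/env bash
  # Sketch: read the same topology facts the EAL probe reports above
  # (core count, NUMA nodes, IOMMU groups, per-node 2 MB hugepages).
  set -u

  echo "Logical cores: $(nproc --all)"
  echo "NUMA nodes:    $(ls -d /sys/devices/system/node/node[0-9]* 2>/dev/null | wc -l)"
  echo "IOMMU groups:  $(ls /sys/kernel/iommu_groups 2>/dev/null | wc -l)"

  for node in /sys/devices/system/node/node[0-9]*; do
      hp=$node/hugepages/hugepages-2048kB
      [[ -d $hp ]] || continue
      echo "$(basename "$node"): $(cat "$hp/free_hugepages")/$(cat "$hp/nr_hugepages") 2048kB pages free"
  done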
00:03:57.355 EAL: IOMMU type 1 (Type 1) is supported 00:03:57.355 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:57.355 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:57.355 EAL: VFIO support initialized 00:03:57.355 EAL: Ask a virtual area of 0x2e000 bytes 00:03:57.355 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:57.355 EAL: Setting up physically contiguous memory... 00:03:57.355 EAL: Setting maximum number of open files to 524288 00:03:57.355 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:57.355 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:57.355 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:57.355 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.355 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:57.355 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:57.355 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.355 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:57.355 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:57.355 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.355 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:57.355 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:57.355 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.355 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:57.355 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:57.355 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.355 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:57.355 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:57.355 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.355 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:57.355 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:57.355 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.355 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:57.355 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:57.355 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.355 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:57.355 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:57.355 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:57.355 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.355 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:57.355 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:57.355 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.355 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:57.355 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:57.355 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.355 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:57.355 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:57.355 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.355 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:57.355 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:57.355 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.355 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:57.355 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:57.355 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.355 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:03:57.355 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:57.355 EAL: Ask a virtual area of 0x61000 bytes 00:03:57.355 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:57.355 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:57.355 EAL: Ask a virtual area of 0x400000000 bytes 00:03:57.355 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:57.355 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:57.355 EAL: Hugepages will be freed exactly as allocated. 00:03:57.355 EAL: No shared files mode enabled, IPC is disabled 00:03:57.355 EAL: No shared files mode enabled, IPC is disabled 00:03:57.355 EAL: TSC frequency is ~2300000 KHz 00:03:57.355 EAL: Main lcore 0 is ready (tid=7ff648254a40;cpuset=[0]) 00:03:57.355 EAL: Trying to obtain current memory policy. 00:03:57.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.355 EAL: Restoring previous memory policy: 0 00:03:57.355 EAL: request: mp_malloc_sync 00:03:57.355 EAL: No shared files mode enabled, IPC is disabled 00:03:57.355 EAL: Heap on socket 0 was expanded by 2MB 00:03:57.355 EAL: No shared files mode enabled, IPC is disabled 00:03:57.355 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:57.355 EAL: Mem event callback 'spdk:(nil)' registered 00:03:57.613 00:03:57.613 00:03:57.613 CUnit - A unit testing framework for C - Version 2.1-3 00:03:57.613 http://cunit.sourceforge.net/ 00:03:57.613 00:03:57.613 00:03:57.613 Suite: components_suite 00:03:57.872 Test: vtophys_malloc_test ...passed 00:03:57.872 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:57.872 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.872 EAL: Restoring previous memory policy: 4 00:03:57.872 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.872 EAL: request: mp_malloc_sync 00:03:57.872 EAL: No shared files mode enabled, IPC is disabled 00:03:57.872 EAL: Heap on socket 0 was expanded by 4MB 00:03:57.872 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.872 EAL: request: mp_malloc_sync 00:03:57.872 EAL: No shared files mode enabled, IPC is disabled 00:03:57.872 EAL: Heap on socket 0 was shrunk by 4MB 00:03:57.872 EAL: Trying to obtain current memory policy. 00:03:57.872 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.872 EAL: Restoring previous memory policy: 4 00:03:57.872 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.872 EAL: request: mp_malloc_sync 00:03:57.872 EAL: No shared files mode enabled, IPC is disabled 00:03:57.872 EAL: Heap on socket 0 was expanded by 6MB 00:03:57.872 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.872 EAL: request: mp_malloc_sync 00:03:57.872 EAL: No shared files mode enabled, IPC is disabled 00:03:57.872 EAL: Heap on socket 0 was shrunk by 6MB 00:03:57.872 EAL: Trying to obtain current memory policy. 00:03:57.872 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.872 EAL: Restoring previous memory policy: 4 00:03:57.872 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.872 EAL: request: mp_malloc_sync 00:03:57.872 EAL: No shared files mode enabled, IPC is disabled 00:03:57.872 EAL: Heap on socket 0 was expanded by 10MB 00:03:57.872 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.872 EAL: request: mp_malloc_sync 00:03:57.872 EAL: No shared files mode enabled, IPC is disabled 00:03:57.872 EAL: Heap on socket 0 was shrunk by 10MB 00:03:57.872 EAL: Trying to obtain current memory policy. 
00:03:57.872 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.872 EAL: Restoring previous memory policy: 4 00:03:57.872 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.872 EAL: request: mp_malloc_sync 00:03:57.872 EAL: No shared files mode enabled, IPC is disabled 00:03:57.872 EAL: Heap on socket 0 was expanded by 18MB 00:03:57.872 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.872 EAL: request: mp_malloc_sync 00:03:57.872 EAL: No shared files mode enabled, IPC is disabled 00:03:57.872 EAL: Heap on socket 0 was shrunk by 18MB 00:03:57.872 EAL: Trying to obtain current memory policy. 00:03:57.872 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.872 EAL: Restoring previous memory policy: 4 00:03:57.872 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.872 EAL: request: mp_malloc_sync 00:03:57.872 EAL: No shared files mode enabled, IPC is disabled 00:03:57.872 EAL: Heap on socket 0 was expanded by 34MB 00:03:57.872 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.872 EAL: request: mp_malloc_sync 00:03:57.872 EAL: No shared files mode enabled, IPC is disabled 00:03:57.872 EAL: Heap on socket 0 was shrunk by 34MB 00:03:58.132 EAL: Trying to obtain current memory policy. 00:03:58.132 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.132 EAL: Restoring previous memory policy: 4 00:03:58.132 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.132 EAL: request: mp_malloc_sync 00:03:58.132 EAL: No shared files mode enabled, IPC is disabled 00:03:58.132 EAL: Heap on socket 0 was expanded by 66MB 00:03:58.132 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.132 EAL: request: mp_malloc_sync 00:03:58.132 EAL: No shared files mode enabled, IPC is disabled 00:03:58.132 EAL: Heap on socket 0 was shrunk by 66MB 00:03:58.397 EAL: Trying to obtain current memory policy. 00:03:58.397 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.397 EAL: Restoring previous memory policy: 4 00:03:58.397 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.397 EAL: request: mp_malloc_sync 00:03:58.397 EAL: No shared files mode enabled, IPC is disabled 00:03:58.397 EAL: Heap on socket 0 was expanded by 130MB 00:03:58.662 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.662 EAL: request: mp_malloc_sync 00:03:58.662 EAL: No shared files mode enabled, IPC is disabled 00:03:58.662 EAL: Heap on socket 0 was shrunk by 130MB 00:03:58.662 EAL: Trying to obtain current memory policy. 00:03:58.662 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.921 EAL: Restoring previous memory policy: 4 00:03:58.921 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.921 EAL: request: mp_malloc_sync 00:03:58.921 EAL: No shared files mode enabled, IPC is disabled 00:03:58.921 EAL: Heap on socket 0 was expanded by 258MB 00:03:59.488 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.488 EAL: request: mp_malloc_sync 00:03:59.488 EAL: No shared files mode enabled, IPC is disabled 00:03:59.488 EAL: Heap on socket 0 was shrunk by 258MB 00:03:59.747 EAL: Trying to obtain current memory policy. 
00:03:59.747 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.747 EAL: Restoring previous memory policy: 4 00:03:59.747 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.747 EAL: request: mp_malloc_sync 00:03:59.747 EAL: No shared files mode enabled, IPC is disabled 00:03:59.747 EAL: Heap on socket 0 was expanded by 514MB 00:04:01.125 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.125 EAL: request: mp_malloc_sync 00:04:01.125 EAL: No shared files mode enabled, IPC is disabled 00:04:01.125 EAL: Heap on socket 0 was shrunk by 514MB 00:04:01.694 EAL: Trying to obtain current memory policy. 00:04:01.694 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.952 EAL: Restoring previous memory policy: 4 00:04:01.952 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.952 EAL: request: mp_malloc_sync 00:04:01.952 EAL: No shared files mode enabled, IPC is disabled 00:04:01.952 EAL: Heap on socket 0 was expanded by 1026MB 00:04:04.487 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.487 EAL: request: mp_malloc_sync 00:04:04.487 EAL: No shared files mode enabled, IPC is disabled 00:04:04.487 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:05.864 passed 00:04:05.864 00:04:05.864 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.864 suites 1 1 n/a 0 0 00:04:05.864 tests 2 2 2 0 0 00:04:05.864 asserts 497 497 497 0 n/a 00:04:05.864 00:04:05.864 Elapsed time = 8.138 seconds 00:04:05.864 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.864 EAL: request: mp_malloc_sync 00:04:05.864 EAL: No shared files mode enabled, IPC is disabled 00:04:05.864 EAL: Heap on socket 0 was shrunk by 2MB 00:04:05.864 EAL: No shared files mode enabled, IPC is disabled 00:04:05.864 EAL: No shared files mode enabled, IPC is disabled 00:04:05.864 EAL: No shared files mode enabled, IPC is disabled 00:04:05.864 00:04:05.864 real 0m8.357s 00:04:05.864 user 0m7.574s 00:04:05.864 sys 0m0.732s 00:04:05.864 15:46:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:05.864 15:46:45 -- common/autotest_common.sh@10 -- # set +x 00:04:05.865 ************************************ 00:04:05.865 END TEST env_vtophys 00:04:05.865 ************************************ 00:04:05.865 15:46:45 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:05.865 15:46:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:05.865 15:46:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:05.865 15:46:45 -- common/autotest_common.sh@10 -- # set +x 00:04:05.865 ************************************ 00:04:05.865 START TEST env_pci 00:04:05.865 ************************************ 00:04:05.865 15:46:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:05.865 00:04:05.865 00:04:05.865 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.865 http://cunit.sourceforge.net/ 00:04:05.865 00:04:05.865 00:04:05.865 Suite: pci 00:04:05.865 Test: pci_hook ...[2024-04-26 15:46:45.429261] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2254740 has claimed it 00:04:05.865 EAL: Cannot find device (10000:00:01.0) 00:04:05.865 EAL: Failed to attach device on primary process 00:04:05.865 passed 00:04:05.865 00:04:05.865 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.865 suites 1 1 n/a 0 0 00:04:05.865 tests 1 1 1 0 0 
00:04:05.865 asserts 25 25 25 0 n/a 00:04:05.865 00:04:05.865 Elapsed time = 0.043 seconds 00:04:05.865 00:04:05.865 real 0m0.117s 00:04:05.865 user 0m0.055s 00:04:05.865 sys 0m0.062s 00:04:05.865 15:46:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:05.865 15:46:45 -- common/autotest_common.sh@10 -- # set +x 00:04:05.865 ************************************ 00:04:05.865 END TEST env_pci 00:04:05.865 ************************************ 00:04:05.865 15:46:45 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:05.865 15:46:45 -- env/env.sh@15 -- # uname 00:04:06.124 15:46:45 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:06.124 15:46:45 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:06.124 15:46:45 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:06.124 15:46:45 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:06.124 15:46:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:06.124 15:46:45 -- common/autotest_common.sh@10 -- # set +x 00:04:06.124 ************************************ 00:04:06.124 START TEST env_dpdk_post_init 00:04:06.124 ************************************ 00:04:06.124 15:46:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:06.124 EAL: Detected CPU lcores: 96 00:04:06.124 EAL: Detected NUMA nodes: 2 00:04:06.124 EAL: Detected shared linkage of DPDK 00:04:06.124 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:06.124 EAL: Selected IOVA mode 'VA' 00:04:06.124 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.124 EAL: VFIO support initialized 00:04:06.124 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:06.383 EAL: Using IOMMU type 1 (Type 1) 00:04:06.383 EAL: Ignore mapping IO port bar(1) 00:04:06.383 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:06.383 EAL: Ignore mapping IO port bar(1) 00:04:06.383 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:06.383 EAL: Ignore mapping IO port bar(1) 00:04:06.383 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:06.383 EAL: Ignore mapping IO port bar(1) 00:04:06.383 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:06.383 EAL: Ignore mapping IO port bar(1) 00:04:06.383 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:06.383 EAL: Ignore mapping IO port bar(1) 00:04:06.383 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:06.383 EAL: Ignore mapping IO port bar(1) 00:04:06.383 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:06.383 EAL: Ignore mapping IO port bar(1) 00:04:06.383 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:07.320 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:07.320 EAL: Ignore mapping IO port bar(1) 00:04:07.320 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:07.320 EAL: Ignore mapping IO port bar(1) 00:04:07.320 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:07.320 EAL: Ignore mapping IO port bar(1) 00:04:07.320 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 
00:04:07.320 EAL: Ignore mapping IO port bar(1) 00:04:07.320 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:07.320 EAL: Ignore mapping IO port bar(1) 00:04:07.320 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:07.320 EAL: Ignore mapping IO port bar(1) 00:04:07.320 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:07.320 EAL: Ignore mapping IO port bar(1) 00:04:07.320 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:07.320 EAL: Ignore mapping IO port bar(1) 00:04:07.320 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:10.606 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:10.606 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:10.606 Starting DPDK initialization... 00:04:10.606 Starting SPDK post initialization... 00:04:10.606 SPDK NVMe probe 00:04:10.606 Attaching to 0000:5e:00.0 00:04:10.606 Attached to 0000:5e:00.0 00:04:10.606 Cleaning up... 00:04:10.606 00:04:10.606 real 0m4.460s 00:04:10.606 user 0m3.335s 00:04:10.606 sys 0m0.196s 00:04:10.606 15:46:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:10.606 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:04:10.606 ************************************ 00:04:10.606 END TEST env_dpdk_post_init 00:04:10.606 ************************************ 00:04:10.606 15:46:50 -- env/env.sh@26 -- # uname 00:04:10.606 15:46:50 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:10.606 15:46:50 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:10.606 15:46:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:10.606 15:46:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:10.606 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:04:10.606 ************************************ 00:04:10.606 START TEST env_mem_callbacks 00:04:10.606 ************************************ 00:04:10.606 15:46:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:10.864 EAL: Detected CPU lcores: 96 00:04:10.864 EAL: Detected NUMA nodes: 2 00:04:10.864 EAL: Detected shared linkage of DPDK 00:04:10.864 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:10.864 EAL: Selected IOVA mode 'VA' 00:04:10.864 EAL: No free 2048 kB hugepages reported on node 1 00:04:10.864 EAL: VFIO support initialized 00:04:10.864 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:10.864 00:04:10.864 00:04:10.864 CUnit - A unit testing framework for C - Version 2.1-3 00:04:10.864 http://cunit.sourceforge.net/ 00:04:10.864 00:04:10.864 00:04:10.864 Suite: memory 00:04:10.864 Test: test ... 
00:04:10.864 register 0x200000200000 2097152 00:04:10.864 malloc 3145728 00:04:10.864 register 0x200000400000 4194304 00:04:10.864 buf 0x2000004fffc0 len 3145728 PASSED 00:04:10.864 malloc 64 00:04:10.864 buf 0x2000004ffec0 len 64 PASSED 00:04:10.864 malloc 4194304 00:04:10.864 register 0x200000800000 6291456 00:04:10.864 buf 0x2000009fffc0 len 4194304 PASSED 00:04:10.865 free 0x2000004fffc0 3145728 00:04:10.865 free 0x2000004ffec0 64 00:04:10.865 unregister 0x200000400000 4194304 PASSED 00:04:10.865 free 0x2000009fffc0 4194304 00:04:10.865 unregister 0x200000800000 6291456 PASSED 00:04:10.865 malloc 8388608 00:04:10.865 register 0x200000400000 10485760 00:04:10.865 buf 0x2000005fffc0 len 8388608 PASSED 00:04:10.865 free 0x2000005fffc0 8388608 00:04:10.865 unregister 0x200000400000 10485760 PASSED 00:04:10.865 passed 00:04:10.865 00:04:10.865 Run Summary: Type Total Ran Passed Failed Inactive 00:04:10.865 suites 1 1 n/a 0 0 00:04:10.865 tests 1 1 1 0 0 00:04:10.865 asserts 15 15 15 0 n/a 00:04:10.865 00:04:10.865 Elapsed time = 0.072 seconds 00:04:10.865 00:04:10.865 real 0m0.174s 00:04:10.865 user 0m0.104s 00:04:10.865 sys 0m0.069s 00:04:10.865 15:46:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:10.865 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:04:10.865 ************************************ 00:04:10.865 END TEST env_mem_callbacks 00:04:10.865 ************************************ 00:04:10.865 00:04:10.865 real 0m14.191s 00:04:10.865 user 0m11.621s 00:04:10.865 sys 0m1.542s 00:04:10.865 15:46:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:10.865 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:04:10.865 ************************************ 00:04:10.865 END TEST env 00:04:10.865 ************************************ 00:04:10.865 15:46:50 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:10.865 15:46:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:10.865 15:46:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:10.865 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:04:11.123 ************************************ 00:04:11.123 START TEST rpc 00:04:11.123 ************************************ 00:04:11.123 15:46:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:11.123 * Looking for test storage... 00:04:11.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:11.123 15:46:50 -- rpc/rpc.sh@65 -- # spdk_pid=2255801 00:04:11.123 15:46:50 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.123 15:46:50 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:11.123 15:46:50 -- rpc/rpc.sh@67 -- # waitforlisten 2255801 00:04:11.123 15:46:50 -- common/autotest_common.sh@817 -- # '[' -z 2255801 ']' 00:04:11.124 15:46:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.124 15:46:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:11.124 15:46:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
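The rpc.sh suite starting here launches spdk_tgt with the bdev tracepoint group enabled and then drives it over the UNIX domain socket /var/tmp/spdk.sock; the rpc_cmd helper seen in the trace is a thin wrapper around SPDK's rpc.py client. A minimal sketch of the bdev round-trip that rpc_integrity performs below, assuming scripts/rpc.py in an SPDK checkout and a crude sleep in place of the test's waitforlisten helper:

    #!/usr/bin/env bash
    ./build/bin/spdk_tgt -e bdev &           # same invocation as the test
    sleep 2                                  # assumption: stands in for waitforlisten

    rpc() { ./scripts/rpc.py "$@"; }         # assumed client location
    rpc bdev_malloc_create 8 512             # 8 MB malloc bdev, 512-byte blocks -> Malloc0
    rpc bdev_get_bdevs | jq length           # 1 bdev so far
    rpc bdev_passthru_create -b Malloc0 -p Passthru0
    rpc bdev_get_bdevs | jq length           # 2 once the passthru claims Malloc0
    rpc bdev_passthru_delete Passthru0
    rpc bdev_malloc_delete Malloc0
    rpc bdev_get_bdevs | jq length           # back to 0
    kill %1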
00:04:11.124 15:46:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:11.124 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:04:11.124 [2024-04-26 15:46:50.781344] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:04:11.124 [2024-04-26 15:46:50.781428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2255801 ] 00:04:11.382 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.382 [2024-04-26 15:46:50.885521] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.640 [2024-04-26 15:46:51.105822] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:11.640 [2024-04-26 15:46:51.105864] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2255801' to capture a snapshot of events at runtime. 00:04:11.640 [2024-04-26 15:46:51.105875] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:11.640 [2024-04-26 15:46:51.105884] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:11.640 [2024-04-26 15:46:51.105894] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2255801 for offline analysis/debug. 00:04:11.640 [2024-04-26 15:46:51.105921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.578 15:46:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:12.578 15:46:52 -- common/autotest_common.sh@850 -- # return 0 00:04:12.578 15:46:52 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:12.578 15:46:52 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:12.578 15:46:52 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:12.578 15:46:52 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:12.578 15:46:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:12.578 15:46:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:12.578 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:04:12.578 ************************************ 00:04:12.578 START TEST rpc_integrity 00:04:12.578 ************************************ 00:04:12.578 15:46:52 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:12.578 15:46:52 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:12.578 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.578 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:04:12.578 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.578 15:46:52 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:12.578 15:46:52 -- rpc/rpc.sh@13 -- # jq length 00:04:12.578 15:46:52 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:12.578 15:46:52 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:12.578 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:04:12.578 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:04:12.578 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.578 15:46:52 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:12.578 15:46:52 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:12.578 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.578 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:04:12.578 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.578 15:46:52 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:12.578 { 00:04:12.578 "name": "Malloc0", 00:04:12.578 "aliases": [ 00:04:12.578 "41ec31b5-dad6-4cc4-95ca-bb6b97b3b0b0" 00:04:12.578 ], 00:04:12.578 "product_name": "Malloc disk", 00:04:12.578 "block_size": 512, 00:04:12.578 "num_blocks": 16384, 00:04:12.578 "uuid": "41ec31b5-dad6-4cc4-95ca-bb6b97b3b0b0", 00:04:12.578 "assigned_rate_limits": { 00:04:12.578 "rw_ios_per_sec": 0, 00:04:12.578 "rw_mbytes_per_sec": 0, 00:04:12.578 "r_mbytes_per_sec": 0, 00:04:12.578 "w_mbytes_per_sec": 0 00:04:12.578 }, 00:04:12.578 "claimed": false, 00:04:12.578 "zoned": false, 00:04:12.578 "supported_io_types": { 00:04:12.578 "read": true, 00:04:12.578 "write": true, 00:04:12.578 "unmap": true, 00:04:12.578 "write_zeroes": true, 00:04:12.578 "flush": true, 00:04:12.578 "reset": true, 00:04:12.578 "compare": false, 00:04:12.578 "compare_and_write": false, 00:04:12.578 "abort": true, 00:04:12.578 "nvme_admin": false, 00:04:12.578 "nvme_io": false 00:04:12.578 }, 00:04:12.578 "memory_domains": [ 00:04:12.578 { 00:04:12.578 "dma_device_id": "system", 00:04:12.578 "dma_device_type": 1 00:04:12.578 }, 00:04:12.578 { 00:04:12.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.578 "dma_device_type": 2 00:04:12.578 } 00:04:12.578 ], 00:04:12.578 "driver_specific": {} 00:04:12.578 } 00:04:12.578 ]' 00:04:12.578 15:46:52 -- rpc/rpc.sh@17 -- # jq length 00:04:12.837 15:46:52 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:12.837 15:46:52 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:12.837 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.837 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:04:12.837 [2024-04-26 15:46:52.287573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:12.837 [2024-04-26 15:46:52.287628] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:12.837 [2024-04-26 15:46:52.287652] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022e80 00:04:12.837 [2024-04-26 15:46:52.287663] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:12.837 [2024-04-26 15:46:52.289597] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:12.837 [2024-04-26 15:46:52.289624] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:12.837 Passthru0 00:04:12.837 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.837 15:46:52 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:12.837 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.837 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:04:12.837 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.837 15:46:52 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:12.837 { 00:04:12.837 "name": "Malloc0", 00:04:12.837 "aliases": [ 00:04:12.837 "41ec31b5-dad6-4cc4-95ca-bb6b97b3b0b0" 00:04:12.837 ], 00:04:12.837 "product_name": "Malloc disk", 00:04:12.837 "block_size": 
512, 00:04:12.837 "num_blocks": 16384, 00:04:12.837 "uuid": "41ec31b5-dad6-4cc4-95ca-bb6b97b3b0b0", 00:04:12.837 "assigned_rate_limits": { 00:04:12.837 "rw_ios_per_sec": 0, 00:04:12.837 "rw_mbytes_per_sec": 0, 00:04:12.837 "r_mbytes_per_sec": 0, 00:04:12.837 "w_mbytes_per_sec": 0 00:04:12.837 }, 00:04:12.837 "claimed": true, 00:04:12.837 "claim_type": "exclusive_write", 00:04:12.837 "zoned": false, 00:04:12.837 "supported_io_types": { 00:04:12.837 "read": true, 00:04:12.837 "write": true, 00:04:12.837 "unmap": true, 00:04:12.837 "write_zeroes": true, 00:04:12.837 "flush": true, 00:04:12.837 "reset": true, 00:04:12.837 "compare": false, 00:04:12.837 "compare_and_write": false, 00:04:12.837 "abort": true, 00:04:12.837 "nvme_admin": false, 00:04:12.837 "nvme_io": false 00:04:12.837 }, 00:04:12.837 "memory_domains": [ 00:04:12.837 { 00:04:12.837 "dma_device_id": "system", 00:04:12.837 "dma_device_type": 1 00:04:12.837 }, 00:04:12.837 { 00:04:12.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.837 "dma_device_type": 2 00:04:12.837 } 00:04:12.837 ], 00:04:12.837 "driver_specific": {} 00:04:12.837 }, 00:04:12.837 { 00:04:12.837 "name": "Passthru0", 00:04:12.837 "aliases": [ 00:04:12.837 "31863471-2239-5a9a-97b7-3324a6e8a1bd" 00:04:12.837 ], 00:04:12.837 "product_name": "passthru", 00:04:12.837 "block_size": 512, 00:04:12.837 "num_blocks": 16384, 00:04:12.837 "uuid": "31863471-2239-5a9a-97b7-3324a6e8a1bd", 00:04:12.837 "assigned_rate_limits": { 00:04:12.837 "rw_ios_per_sec": 0, 00:04:12.837 "rw_mbytes_per_sec": 0, 00:04:12.837 "r_mbytes_per_sec": 0, 00:04:12.837 "w_mbytes_per_sec": 0 00:04:12.837 }, 00:04:12.837 "claimed": false, 00:04:12.837 "zoned": false, 00:04:12.837 "supported_io_types": { 00:04:12.837 "read": true, 00:04:12.837 "write": true, 00:04:12.837 "unmap": true, 00:04:12.837 "write_zeroes": true, 00:04:12.837 "flush": true, 00:04:12.837 "reset": true, 00:04:12.837 "compare": false, 00:04:12.837 "compare_and_write": false, 00:04:12.837 "abort": true, 00:04:12.837 "nvme_admin": false, 00:04:12.837 "nvme_io": false 00:04:12.837 }, 00:04:12.837 "memory_domains": [ 00:04:12.837 { 00:04:12.837 "dma_device_id": "system", 00:04:12.837 "dma_device_type": 1 00:04:12.837 }, 00:04:12.837 { 00:04:12.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.837 "dma_device_type": 2 00:04:12.837 } 00:04:12.837 ], 00:04:12.837 "driver_specific": { 00:04:12.837 "passthru": { 00:04:12.837 "name": "Passthru0", 00:04:12.837 "base_bdev_name": "Malloc0" 00:04:12.837 } 00:04:12.837 } 00:04:12.837 } 00:04:12.837 ]' 00:04:12.837 15:46:52 -- rpc/rpc.sh@21 -- # jq length 00:04:12.837 15:46:52 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:12.837 15:46:52 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:12.837 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.837 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:04:12.837 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.837 15:46:52 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:12.837 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.837 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:04:12.837 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.837 15:46:52 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:12.837 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:12.837 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:04:12.838 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:12.838 15:46:52 -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:04:12.838 15:46:52 -- rpc/rpc.sh@26 -- # jq length 00:04:12.838 15:46:52 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:12.838 00:04:12.838 real 0m0.318s 00:04:12.838 user 0m0.179s 00:04:12.838 sys 0m0.035s 00:04:12.838 15:46:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:12.838 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:04:12.838 ************************************ 00:04:12.838 END TEST rpc_integrity 00:04:12.838 ************************************ 00:04:12.838 15:46:52 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:12.838 15:46:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:12.838 15:46:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:12.838 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:04:13.158 ************************************ 00:04:13.158 START TEST rpc_plugins 00:04:13.158 ************************************ 00:04:13.158 15:46:52 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:04:13.158 15:46:52 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:13.158 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:13.158 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:04:13.158 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.158 15:46:52 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:13.158 15:46:52 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:13.158 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:13.158 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:04:13.158 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.158 15:46:52 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:13.158 { 00:04:13.158 "name": "Malloc1", 00:04:13.158 "aliases": [ 00:04:13.158 "f475bea8-ae0e-488b-b667-13ed5c383b4c" 00:04:13.158 ], 00:04:13.158 "product_name": "Malloc disk", 00:04:13.158 "block_size": 4096, 00:04:13.158 "num_blocks": 256, 00:04:13.158 "uuid": "f475bea8-ae0e-488b-b667-13ed5c383b4c", 00:04:13.158 "assigned_rate_limits": { 00:04:13.158 "rw_ios_per_sec": 0, 00:04:13.158 "rw_mbytes_per_sec": 0, 00:04:13.158 "r_mbytes_per_sec": 0, 00:04:13.158 "w_mbytes_per_sec": 0 00:04:13.158 }, 00:04:13.158 "claimed": false, 00:04:13.158 "zoned": false, 00:04:13.158 "supported_io_types": { 00:04:13.158 "read": true, 00:04:13.158 "write": true, 00:04:13.158 "unmap": true, 00:04:13.158 "write_zeroes": true, 00:04:13.158 "flush": true, 00:04:13.158 "reset": true, 00:04:13.158 "compare": false, 00:04:13.158 "compare_and_write": false, 00:04:13.158 "abort": true, 00:04:13.158 "nvme_admin": false, 00:04:13.158 "nvme_io": false 00:04:13.158 }, 00:04:13.158 "memory_domains": [ 00:04:13.158 { 00:04:13.158 "dma_device_id": "system", 00:04:13.158 "dma_device_type": 1 00:04:13.158 }, 00:04:13.158 { 00:04:13.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.158 "dma_device_type": 2 00:04:13.158 } 00:04:13.158 ], 00:04:13.158 "driver_specific": {} 00:04:13.158 } 00:04:13.158 ]' 00:04:13.158 15:46:52 -- rpc/rpc.sh@32 -- # jq length 00:04:13.158 15:46:52 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:13.158 15:46:52 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:13.158 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:13.158 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:04:13.158 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.158 15:46:52 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:13.158 15:46:52 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:04:13.158 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:04:13.158 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.158 15:46:52 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:13.158 15:46:52 -- rpc/rpc.sh@36 -- # jq length 00:04:13.158 15:46:52 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:13.158 00:04:13.158 real 0m0.144s 00:04:13.158 user 0m0.084s 00:04:13.158 sys 0m0.020s 00:04:13.158 15:46:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:13.158 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:04:13.158 ************************************ 00:04:13.158 END TEST rpc_plugins 00:04:13.158 ************************************ 00:04:13.158 15:46:52 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:13.158 15:46:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:13.158 15:46:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:13.158 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:04:13.439 ************************************ 00:04:13.439 START TEST rpc_trace_cmd_test 00:04:13.439 ************************************ 00:04:13.439 15:46:52 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:04:13.439 15:46:52 -- rpc/rpc.sh@40 -- # local info 00:04:13.439 15:46:52 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:13.439 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:13.439 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:04:13.439 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.439 15:46:52 -- rpc/rpc.sh@42 -- # info='{ 00:04:13.439 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2255801", 00:04:13.439 "tpoint_group_mask": "0x8", 00:04:13.439 "iscsi_conn": { 00:04:13.439 "mask": "0x2", 00:04:13.439 "tpoint_mask": "0x0" 00:04:13.439 }, 00:04:13.439 "scsi": { 00:04:13.439 "mask": "0x4", 00:04:13.439 "tpoint_mask": "0x0" 00:04:13.439 }, 00:04:13.439 "bdev": { 00:04:13.439 "mask": "0x8", 00:04:13.439 "tpoint_mask": "0xffffffffffffffff" 00:04:13.439 }, 00:04:13.439 "nvmf_rdma": { 00:04:13.439 "mask": "0x10", 00:04:13.439 "tpoint_mask": "0x0" 00:04:13.439 }, 00:04:13.439 "nvmf_tcp": { 00:04:13.440 "mask": "0x20", 00:04:13.440 "tpoint_mask": "0x0" 00:04:13.440 }, 00:04:13.440 "ftl": { 00:04:13.440 "mask": "0x40", 00:04:13.440 "tpoint_mask": "0x0" 00:04:13.440 }, 00:04:13.440 "blobfs": { 00:04:13.440 "mask": "0x80", 00:04:13.440 "tpoint_mask": "0x0" 00:04:13.440 }, 00:04:13.440 "dsa": { 00:04:13.440 "mask": "0x200", 00:04:13.440 "tpoint_mask": "0x0" 00:04:13.440 }, 00:04:13.440 "thread": { 00:04:13.440 "mask": "0x400", 00:04:13.440 "tpoint_mask": "0x0" 00:04:13.440 }, 00:04:13.440 "nvme_pcie": { 00:04:13.440 "mask": "0x800", 00:04:13.440 "tpoint_mask": "0x0" 00:04:13.440 }, 00:04:13.440 "iaa": { 00:04:13.440 "mask": "0x1000", 00:04:13.440 "tpoint_mask": "0x0" 00:04:13.440 }, 00:04:13.440 "nvme_tcp": { 00:04:13.440 "mask": "0x2000", 00:04:13.440 "tpoint_mask": "0x0" 00:04:13.440 }, 00:04:13.440 "bdev_nvme": { 00:04:13.440 "mask": "0x4000", 00:04:13.440 "tpoint_mask": "0x0" 00:04:13.440 }, 00:04:13.440 "sock": { 00:04:13.440 "mask": "0x8000", 00:04:13.440 "tpoint_mask": "0x0" 00:04:13.440 } 00:04:13.440 }' 00:04:13.440 15:46:52 -- rpc/rpc.sh@43 -- # jq length 00:04:13.440 15:46:52 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:13.440 15:46:52 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:13.440 15:46:53 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:13.440 15:46:53 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
00:04:13.440 15:46:53 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:13.440 15:46:53 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:13.440 15:46:53 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:13.440 15:46:53 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:13.440 15:46:53 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:13.440 00:04:13.440 real 0m0.195s 00:04:13.440 user 0m0.165s 00:04:13.440 sys 0m0.023s 00:04:13.440 15:46:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:13.440 15:46:53 -- common/autotest_common.sh@10 -- # set +x 00:04:13.440 ************************************ 00:04:13.440 END TEST rpc_trace_cmd_test 00:04:13.440 ************************************ 00:04:13.699 15:46:53 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:13.699 15:46:53 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:13.699 15:46:53 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:13.699 15:46:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:13.699 15:46:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:13.699 15:46:53 -- common/autotest_common.sh@10 -- # set +x 00:04:13.699 ************************************ 00:04:13.699 START TEST rpc_daemon_integrity 00:04:13.699 ************************************ 00:04:13.699 15:46:53 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:13.699 15:46:53 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:13.699 15:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:13.699 15:46:53 -- common/autotest_common.sh@10 -- # set +x 00:04:13.699 15:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.699 15:46:53 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:13.699 15:46:53 -- rpc/rpc.sh@13 -- # jq length 00:04:13.699 15:46:53 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:13.699 15:46:53 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:13.699 15:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:13.699 15:46:53 -- common/autotest_common.sh@10 -- # set +x 00:04:13.699 15:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.699 15:46:53 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:13.699 15:46:53 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:13.699 15:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:13.699 15:46:53 -- common/autotest_common.sh@10 -- # set +x 00:04:13.699 15:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.699 15:46:53 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:13.699 { 00:04:13.699 "name": "Malloc2", 00:04:13.699 "aliases": [ 00:04:13.699 "8175ce51-c182-4a72-affc-18fa02b81b41" 00:04:13.699 ], 00:04:13.699 "product_name": "Malloc disk", 00:04:13.699 "block_size": 512, 00:04:13.699 "num_blocks": 16384, 00:04:13.699 "uuid": "8175ce51-c182-4a72-affc-18fa02b81b41", 00:04:13.699 "assigned_rate_limits": { 00:04:13.699 "rw_ios_per_sec": 0, 00:04:13.699 "rw_mbytes_per_sec": 0, 00:04:13.699 "r_mbytes_per_sec": 0, 00:04:13.699 "w_mbytes_per_sec": 0 00:04:13.699 }, 00:04:13.699 "claimed": false, 00:04:13.699 "zoned": false, 00:04:13.699 "supported_io_types": { 00:04:13.699 "read": true, 00:04:13.699 "write": true, 00:04:13.699 "unmap": true, 00:04:13.699 "write_zeroes": true, 00:04:13.699 "flush": true, 00:04:13.699 "reset": true, 00:04:13.699 "compare": false, 00:04:13.699 "compare_and_write": false, 00:04:13.699 "abort": true, 00:04:13.699 "nvme_admin": false, 00:04:13.699 "nvme_io": false 00:04:13.699 }, 00:04:13.699 "memory_domains": [ 00:04:13.699 { 00:04:13.699 "dma_device_id": "system", 00:04:13.699 
"dma_device_type": 1 00:04:13.699 }, 00:04:13.699 { 00:04:13.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.700 "dma_device_type": 2 00:04:13.700 } 00:04:13.700 ], 00:04:13.700 "driver_specific": {} 00:04:13.700 } 00:04:13.700 ]' 00:04:13.700 15:46:53 -- rpc/rpc.sh@17 -- # jq length 00:04:13.959 15:46:53 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:13.959 15:46:53 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:13.959 15:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:13.959 15:46:53 -- common/autotest_common.sh@10 -- # set +x 00:04:13.959 [2024-04-26 15:46:53.416846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:13.959 [2024-04-26 15:46:53.416894] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:13.959 [2024-04-26 15:46:53.416918] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000024080 00:04:13.959 [2024-04-26 15:46:53.416928] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:13.959 [2024-04-26 15:46:53.418841] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:13.959 [2024-04-26 15:46:53.418869] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:13.959 Passthru0 00:04:13.959 15:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.959 15:46:53 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:13.959 15:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:13.959 15:46:53 -- common/autotest_common.sh@10 -- # set +x 00:04:13.959 15:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.959 15:46:53 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:13.959 { 00:04:13.959 "name": "Malloc2", 00:04:13.959 "aliases": [ 00:04:13.959 "8175ce51-c182-4a72-affc-18fa02b81b41" 00:04:13.959 ], 00:04:13.959 "product_name": "Malloc disk", 00:04:13.959 "block_size": 512, 00:04:13.959 "num_blocks": 16384, 00:04:13.959 "uuid": "8175ce51-c182-4a72-affc-18fa02b81b41", 00:04:13.959 "assigned_rate_limits": { 00:04:13.959 "rw_ios_per_sec": 0, 00:04:13.959 "rw_mbytes_per_sec": 0, 00:04:13.959 "r_mbytes_per_sec": 0, 00:04:13.959 "w_mbytes_per_sec": 0 00:04:13.959 }, 00:04:13.959 "claimed": true, 00:04:13.959 "claim_type": "exclusive_write", 00:04:13.959 "zoned": false, 00:04:13.959 "supported_io_types": { 00:04:13.959 "read": true, 00:04:13.959 "write": true, 00:04:13.959 "unmap": true, 00:04:13.959 "write_zeroes": true, 00:04:13.959 "flush": true, 00:04:13.959 "reset": true, 00:04:13.959 "compare": false, 00:04:13.959 "compare_and_write": false, 00:04:13.959 "abort": true, 00:04:13.959 "nvme_admin": false, 00:04:13.959 "nvme_io": false 00:04:13.959 }, 00:04:13.959 "memory_domains": [ 00:04:13.959 { 00:04:13.959 "dma_device_id": "system", 00:04:13.959 "dma_device_type": 1 00:04:13.959 }, 00:04:13.959 { 00:04:13.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.959 "dma_device_type": 2 00:04:13.959 } 00:04:13.959 ], 00:04:13.959 "driver_specific": {} 00:04:13.959 }, 00:04:13.959 { 00:04:13.959 "name": "Passthru0", 00:04:13.959 "aliases": [ 00:04:13.959 "5490bd08-5e10-54d0-91d9-91aba2342c2f" 00:04:13.959 ], 00:04:13.959 "product_name": "passthru", 00:04:13.959 "block_size": 512, 00:04:13.959 "num_blocks": 16384, 00:04:13.959 "uuid": "5490bd08-5e10-54d0-91d9-91aba2342c2f", 00:04:13.959 "assigned_rate_limits": { 00:04:13.959 "rw_ios_per_sec": 0, 00:04:13.959 "rw_mbytes_per_sec": 0, 00:04:13.959 "r_mbytes_per_sec": 0, 00:04:13.959 
"w_mbytes_per_sec": 0 00:04:13.959 }, 00:04:13.959 "claimed": false, 00:04:13.959 "zoned": false, 00:04:13.959 "supported_io_types": { 00:04:13.959 "read": true, 00:04:13.959 "write": true, 00:04:13.959 "unmap": true, 00:04:13.959 "write_zeroes": true, 00:04:13.959 "flush": true, 00:04:13.959 "reset": true, 00:04:13.959 "compare": false, 00:04:13.959 "compare_and_write": false, 00:04:13.959 "abort": true, 00:04:13.959 "nvme_admin": false, 00:04:13.959 "nvme_io": false 00:04:13.959 }, 00:04:13.959 "memory_domains": [ 00:04:13.959 { 00:04:13.959 "dma_device_id": "system", 00:04:13.959 "dma_device_type": 1 00:04:13.959 }, 00:04:13.959 { 00:04:13.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.959 "dma_device_type": 2 00:04:13.959 } 00:04:13.959 ], 00:04:13.959 "driver_specific": { 00:04:13.959 "passthru": { 00:04:13.959 "name": "Passthru0", 00:04:13.959 "base_bdev_name": "Malloc2" 00:04:13.959 } 00:04:13.959 } 00:04:13.959 } 00:04:13.959 ]' 00:04:13.959 15:46:53 -- rpc/rpc.sh@21 -- # jq length 00:04:13.959 15:46:53 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:13.959 15:46:53 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:13.959 15:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:13.959 15:46:53 -- common/autotest_common.sh@10 -- # set +x 00:04:13.959 15:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.959 15:46:53 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:13.959 15:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:13.959 15:46:53 -- common/autotest_common.sh@10 -- # set +x 00:04:13.959 15:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.959 15:46:53 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:13.959 15:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:13.959 15:46:53 -- common/autotest_common.sh@10 -- # set +x 00:04:13.959 15:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:13.959 15:46:53 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:13.959 15:46:53 -- rpc/rpc.sh@26 -- # jq length 00:04:13.959 15:46:53 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:13.959 00:04:13.959 real 0m0.299s 00:04:13.959 user 0m0.171s 00:04:13.959 sys 0m0.030s 00:04:13.959 15:46:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:13.959 15:46:53 -- common/autotest_common.sh@10 -- # set +x 00:04:13.959 ************************************ 00:04:13.959 END TEST rpc_daemon_integrity 00:04:13.959 ************************************ 00:04:13.959 15:46:53 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:13.959 15:46:53 -- rpc/rpc.sh@84 -- # killprocess 2255801 00:04:13.959 15:46:53 -- common/autotest_common.sh@936 -- # '[' -z 2255801 ']' 00:04:13.959 15:46:53 -- common/autotest_common.sh@940 -- # kill -0 2255801 00:04:13.959 15:46:53 -- common/autotest_common.sh@941 -- # uname 00:04:13.959 15:46:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:13.959 15:46:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2255801 00:04:14.219 15:46:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:14.219 15:46:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:14.219 15:46:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2255801' 00:04:14.219 killing process with pid 2255801 00:04:14.219 15:46:53 -- common/autotest_common.sh@955 -- # kill 2255801 00:04:14.219 15:46:53 -- common/autotest_common.sh@960 -- # wait 2255801 00:04:16.752 00:04:16.752 real 0m5.390s 00:04:16.752 user 0m6.120s 
00:04:16.752 sys 0m0.913s 00:04:16.752 15:46:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:16.752 15:46:56 -- common/autotest_common.sh@10 -- # set +x 00:04:16.752 ************************************ 00:04:16.752 END TEST rpc 00:04:16.752 ************************************ 00:04:16.752 15:46:56 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:16.752 15:46:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:16.752 15:46:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:16.752 15:46:56 -- common/autotest_common.sh@10 -- # set +x 00:04:16.752 ************************************ 00:04:16.752 START TEST skip_rpc 00:04:16.752 ************************************ 00:04:16.752 15:46:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:16.752 * Looking for test storage... 00:04:16.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:16.752 15:46:56 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:16.752 15:46:56 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:16.752 15:46:56 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:16.752 15:46:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:16.752 15:46:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:16.752 15:46:56 -- common/autotest_common.sh@10 -- # set +x 00:04:16.752 ************************************ 00:04:16.752 START TEST skip_rpc 00:04:16.752 ************************************ 00:04:16.752 15:46:56 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:04:16.752 15:46:56 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2256948 00:04:16.752 15:46:56 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.752 15:46:56 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:16.752 15:46:56 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:17.010 [2024-04-26 15:46:56.455170] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:04:17.011 [2024-04-26 15:46:56.455242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2256948 ] 00:04:17.011 EAL: No free 2048 kB hugepages reported on node 1 00:04:17.011 [2024-04-26 15:46:56.554757] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.270 [2024-04-26 15:46:56.771627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.541 15:47:01 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:22.541 15:47:01 -- common/autotest_common.sh@638 -- # local es=0 00:04:22.541 15:47:01 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:22.541 15:47:01 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:04:22.541 15:47:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:22.541 15:47:01 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:04:22.541 15:47:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:22.541 15:47:01 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:04:22.541 15:47:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:22.541 15:47:01 -- common/autotest_common.sh@10 -- # set +x 00:04:22.541 15:47:01 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:22.541 15:47:01 -- common/autotest_common.sh@641 -- # es=1 00:04:22.541 15:47:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:22.541 15:47:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:22.541 15:47:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:22.541 15:47:01 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:22.541 15:47:01 -- rpc/skip_rpc.sh@23 -- # killprocess 2256948 00:04:22.541 15:47:01 -- common/autotest_common.sh@936 -- # '[' -z 2256948 ']' 00:04:22.541 15:47:01 -- common/autotest_common.sh@940 -- # kill -0 2256948 00:04:22.541 15:47:01 -- common/autotest_common.sh@941 -- # uname 00:04:22.541 15:47:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:22.541 15:47:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2256948 00:04:22.541 15:47:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:22.541 15:47:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:22.541 15:47:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2256948' 00:04:22.541 killing process with pid 2256948 00:04:22.541 15:47:01 -- common/autotest_common.sh@955 -- # kill 2256948 00:04:22.541 15:47:01 -- common/autotest_common.sh@960 -- # wait 2256948 00:04:24.450 00:04:24.450 real 0m7.379s 00:04:24.450 user 0m7.027s 00:04:24.450 sys 0m0.360s 00:04:24.450 15:47:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:24.450 15:47:03 -- common/autotest_common.sh@10 -- # set +x 00:04:24.450 ************************************ 00:04:24.450 END TEST skip_rpc 00:04:24.450 ************************************ 00:04:24.450 15:47:03 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:24.450 15:47:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.450 15:47:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.450 15:47:03 -- common/autotest_common.sh@10 -- # set +x 00:04:24.450 ************************************ 00:04:24.450 START TEST skip_rpc_with_json 00:04:24.450 ************************************ 
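The skip_rpc_with_json test that begins here saves the live target's configuration as JSON (the large save_config dump printed below, including the nvmf_create_transport TCP settings) and then boots a second spdk_tgt with --no-rpc-server directly from that file. A condensed sketch of the same flow, under the same rpc.py path assumption as the earlier sketch:

    ./build/bin/spdk_tgt -m 0x1 &                        # target with the RPC server up
    sleep 2
    ./scripts/rpc.py nvmf_create_transport -t tcp        # state worth persisting
    ./scripts/rpc.py save_config > config.json
    kill %1; wait

    # Replay the config with no RPC server; finding 'TCP Transport Init' in the
    # log shows the transport was recreated from the JSON alone.
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    sleep 2
    grep -q 'TCP Transport Init' log.txt && echo "config replayed"
    kill %1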
00:04:24.450 15:47:03 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:04:24.450 15:47:03 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:24.450 15:47:03 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2258352 00:04:24.450 15:47:03 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.450 15:47:03 -- rpc/skip_rpc.sh@31 -- # waitforlisten 2258352 00:04:24.450 15:47:03 -- common/autotest_common.sh@817 -- # '[' -z 2258352 ']' 00:04:24.450 15:47:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.450 15:47:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:24.450 15:47:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.450 15:47:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:24.450 15:47:03 -- common/autotest_common.sh@10 -- # set +x 00:04:24.450 15:47:03 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:24.450 [2024-04-26 15:47:03.979606] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:04:24.450 [2024-04-26 15:47:03.979690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2258352 ] 00:04:24.450 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.450 [2024-04-26 15:47:04.082325] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.708 [2024-04-26 15:47:04.298814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.646 15:47:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:25.646 15:47:05 -- common/autotest_common.sh@850 -- # return 0 00:04:25.646 15:47:05 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:25.646 15:47:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:25.646 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:25.646 [2024-04-26 15:47:05.203065] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:25.646 request: 00:04:25.646 { 00:04:25.646 "trtype": "tcp", 00:04:25.646 "method": "nvmf_get_transports", 00:04:25.646 "req_id": 1 00:04:25.646 } 00:04:25.646 Got JSON-RPC error response 00:04:25.646 response: 00:04:25.646 { 00:04:25.646 "code": -19, 00:04:25.646 "message": "No such device" 00:04:25.646 } 00:04:25.646 15:47:05 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:25.646 15:47:05 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:25.646 15:47:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:25.646 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:25.646 [2024-04-26 15:47:05.211178] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:25.646 15:47:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:25.646 15:47:05 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:25.646 15:47:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:25.646 15:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:25.905 15:47:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:25.905 15:47:05 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:25.905 { 
00:04:25.905 "subsystems": [ 00:04:25.905 { 00:04:25.905 "subsystem": "vfio_user_target", 00:04:25.905 "config": null 00:04:25.905 }, 00:04:25.905 { 00:04:25.905 "subsystem": "keyring", 00:04:25.905 "config": [] 00:04:25.905 }, 00:04:25.905 { 00:04:25.905 "subsystem": "iobuf", 00:04:25.905 "config": [ 00:04:25.905 { 00:04:25.905 "method": "iobuf_set_options", 00:04:25.905 "params": { 00:04:25.905 "small_pool_count": 8192, 00:04:25.905 "large_pool_count": 1024, 00:04:25.905 "small_bufsize": 8192, 00:04:25.905 "large_bufsize": 135168 00:04:25.905 } 00:04:25.905 } 00:04:25.905 ] 00:04:25.905 }, 00:04:25.905 { 00:04:25.905 "subsystem": "sock", 00:04:25.905 "config": [ 00:04:25.905 { 00:04:25.905 "method": "sock_impl_set_options", 00:04:25.905 "params": { 00:04:25.905 "impl_name": "posix", 00:04:25.905 "recv_buf_size": 2097152, 00:04:25.905 "send_buf_size": 2097152, 00:04:25.905 "enable_recv_pipe": true, 00:04:25.905 "enable_quickack": false, 00:04:25.905 "enable_placement_id": 0, 00:04:25.905 "enable_zerocopy_send_server": true, 00:04:25.905 "enable_zerocopy_send_client": false, 00:04:25.905 "zerocopy_threshold": 0, 00:04:25.905 "tls_version": 0, 00:04:25.905 "enable_ktls": false 00:04:25.905 } 00:04:25.905 }, 00:04:25.905 { 00:04:25.905 "method": "sock_impl_set_options", 00:04:25.905 "params": { 00:04:25.905 "impl_name": "ssl", 00:04:25.905 "recv_buf_size": 4096, 00:04:25.905 "send_buf_size": 4096, 00:04:25.905 "enable_recv_pipe": true, 00:04:25.905 "enable_quickack": false, 00:04:25.905 "enable_placement_id": 0, 00:04:25.905 "enable_zerocopy_send_server": true, 00:04:25.905 "enable_zerocopy_send_client": false, 00:04:25.905 "zerocopy_threshold": 0, 00:04:25.906 "tls_version": 0, 00:04:25.906 "enable_ktls": false 00:04:25.906 } 00:04:25.906 } 00:04:25.906 ] 00:04:25.906 }, 00:04:25.906 { 00:04:25.906 "subsystem": "vmd", 00:04:25.906 "config": [] 00:04:25.906 }, 00:04:25.906 { 00:04:25.906 "subsystem": "accel", 00:04:25.906 "config": [ 00:04:25.906 { 00:04:25.906 "method": "accel_set_options", 00:04:25.906 "params": { 00:04:25.906 "small_cache_size": 128, 00:04:25.906 "large_cache_size": 16, 00:04:25.906 "task_count": 2048, 00:04:25.906 "sequence_count": 2048, 00:04:25.906 "buf_count": 2048 00:04:25.906 } 00:04:25.906 } 00:04:25.906 ] 00:04:25.906 }, 00:04:25.906 { 00:04:25.906 "subsystem": "bdev", 00:04:25.906 "config": [ 00:04:25.906 { 00:04:25.906 "method": "bdev_set_options", 00:04:25.906 "params": { 00:04:25.906 "bdev_io_pool_size": 65535, 00:04:25.906 "bdev_io_cache_size": 256, 00:04:25.906 "bdev_auto_examine": true, 00:04:25.906 "iobuf_small_cache_size": 128, 00:04:25.906 "iobuf_large_cache_size": 16 00:04:25.906 } 00:04:25.906 }, 00:04:25.906 { 00:04:25.906 "method": "bdev_raid_set_options", 00:04:25.906 "params": { 00:04:25.906 "process_window_size_kb": 1024 00:04:25.906 } 00:04:25.906 }, 00:04:25.906 { 00:04:25.906 "method": "bdev_iscsi_set_options", 00:04:25.906 "params": { 00:04:25.906 "timeout_sec": 30 00:04:25.906 } 00:04:25.906 }, 00:04:25.906 { 00:04:25.906 "method": "bdev_nvme_set_options", 00:04:25.906 "params": { 00:04:25.906 "action_on_timeout": "none", 00:04:25.906 "timeout_us": 0, 00:04:25.906 "timeout_admin_us": 0, 00:04:25.906 "keep_alive_timeout_ms": 10000, 00:04:25.906 "arbitration_burst": 0, 00:04:25.906 "low_priority_weight": 0, 00:04:25.906 "medium_priority_weight": 0, 00:04:25.906 "high_priority_weight": 0, 00:04:25.906 "nvme_adminq_poll_period_us": 10000, 00:04:25.906 "nvme_ioq_poll_period_us": 0, 00:04:25.906 "io_queue_requests": 0, 00:04:25.906 
"delay_cmd_submit": true, 00:04:25.906 "transport_retry_count": 4, 00:04:25.906 "bdev_retry_count": 3, 00:04:25.906 "transport_ack_timeout": 0, 00:04:25.906 "ctrlr_loss_timeout_sec": 0, 00:04:25.906 "reconnect_delay_sec": 0, 00:04:25.906 "fast_io_fail_timeout_sec": 0, 00:04:25.906 "disable_auto_failback": false, 00:04:25.906 "generate_uuids": false, 00:04:25.906 "transport_tos": 0, 00:04:25.906 "nvme_error_stat": false, 00:04:25.906 "rdma_srq_size": 0, 00:04:25.906 "io_path_stat": false, 00:04:25.906 "allow_accel_sequence": false, 00:04:25.906 "rdma_max_cq_size": 0, 00:04:25.906 "rdma_cm_event_timeout_ms": 0, 00:04:25.906 "dhchap_digests": [ 00:04:25.906 "sha256", 00:04:25.906 "sha384", 00:04:25.906 "sha512" 00:04:25.906 ], 00:04:25.906 "dhchap_dhgroups": [ 00:04:25.906 "null", 00:04:25.906 "ffdhe2048", 00:04:25.906 "ffdhe3072", 00:04:25.906 "ffdhe4096", 00:04:25.906 "ffdhe6144", 00:04:25.906 "ffdhe8192" 00:04:25.906 ] 00:04:25.906 } 00:04:25.906 }, 00:04:25.906 { 00:04:25.906 "method": "bdev_nvme_set_hotplug", 00:04:25.906 "params": { 00:04:25.906 "period_us": 100000, 00:04:25.906 "enable": false 00:04:25.906 } 00:04:25.906 }, 00:04:25.906 { 00:04:25.906 "method": "bdev_wait_for_examine" 00:04:25.906 } 00:04:25.906 ] 00:04:25.906 }, 00:04:25.906 { 00:04:25.906 "subsystem": "scsi", 00:04:25.906 "config": null 00:04:25.906 }, 00:04:25.906 { 00:04:25.906 "subsystem": "scheduler", 00:04:25.906 "config": [ 00:04:25.906 { 00:04:25.906 "method": "framework_set_scheduler", 00:04:25.906 "params": { 00:04:25.906 "name": "static" 00:04:25.906 } 00:04:25.906 } 00:04:25.906 ] 00:04:25.906 }, 00:04:25.906 { 00:04:25.906 "subsystem": "vhost_scsi", 00:04:25.906 "config": [] 00:04:25.906 }, 00:04:25.906 { 00:04:25.906 "subsystem": "vhost_blk", 00:04:25.906 "config": [] 00:04:25.906 }, 00:04:25.906 { 00:04:25.906 "subsystem": "ublk", 00:04:25.906 "config": [] 00:04:25.906 }, 00:04:25.906 { 00:04:25.906 "subsystem": "nbd", 00:04:25.906 "config": [] 00:04:25.906 }, 00:04:25.906 { 00:04:25.906 "subsystem": "nvmf", 00:04:25.906 "config": [ 00:04:25.906 { 00:04:25.906 "method": "nvmf_set_config", 00:04:25.906 "params": { 00:04:25.906 "discovery_filter": "match_any", 00:04:25.906 "admin_cmd_passthru": { 00:04:25.906 "identify_ctrlr": false 00:04:25.906 } 00:04:25.906 } 00:04:25.906 }, 00:04:25.906 { 00:04:25.906 "method": "nvmf_set_max_subsystems", 00:04:25.906 "params": { 00:04:25.906 "max_subsystems": 1024 00:04:25.906 } 00:04:25.906 }, 00:04:25.906 { 00:04:25.906 "method": "nvmf_set_crdt", 00:04:25.906 "params": { 00:04:25.906 "crdt1": 0, 00:04:25.906 "crdt2": 0, 00:04:25.906 "crdt3": 0 00:04:25.906 } 00:04:25.906 }, 00:04:25.906 { 00:04:25.906 "method": "nvmf_create_transport", 00:04:25.906 "params": { 00:04:25.906 "trtype": "TCP", 00:04:25.906 "max_queue_depth": 128, 00:04:25.906 "max_io_qpairs_per_ctrlr": 127, 00:04:25.906 "in_capsule_data_size": 4096, 00:04:25.906 "max_io_size": 131072, 00:04:25.906 "io_unit_size": 131072, 00:04:25.906 "max_aq_depth": 128, 00:04:25.906 "num_shared_buffers": 511, 00:04:25.906 "buf_cache_size": 4294967295, 00:04:25.906 "dif_insert_or_strip": false, 00:04:25.906 "zcopy": false, 00:04:25.906 "c2h_success": true, 00:04:25.906 "sock_priority": 0, 00:04:25.906 "abort_timeout_sec": 1, 00:04:25.906 "ack_timeout": 0, 00:04:25.906 "data_wr_pool_size": 0 00:04:25.906 } 00:04:25.906 } 00:04:25.906 ] 00:04:25.906 }, 00:04:25.906 { 00:04:25.906 "subsystem": "iscsi", 00:04:25.906 "config": [ 00:04:25.906 { 00:04:25.906 "method": "iscsi_set_options", 00:04:25.906 "params": { 00:04:25.906 
"node_base": "iqn.2016-06.io.spdk", 00:04:25.906 "max_sessions": 128, 00:04:25.906 "max_connections_per_session": 2, 00:04:25.906 "max_queue_depth": 64, 00:04:25.906 "default_time2wait": 2, 00:04:25.906 "default_time2retain": 20, 00:04:25.906 "first_burst_length": 8192, 00:04:25.906 "immediate_data": true, 00:04:25.906 "allow_duplicated_isid": false, 00:04:25.906 "error_recovery_level": 0, 00:04:25.906 "nop_timeout": 60, 00:04:25.906 "nop_in_interval": 30, 00:04:25.906 "disable_chap": false, 00:04:25.906 "require_chap": false, 00:04:25.906 "mutual_chap": false, 00:04:25.906 "chap_group": 0, 00:04:25.906 "max_large_datain_per_connection": 64, 00:04:25.906 "max_r2t_per_connection": 4, 00:04:25.906 "pdu_pool_size": 36864, 00:04:25.906 "immediate_data_pool_size": 16384, 00:04:25.906 "data_out_pool_size": 2048 00:04:25.906 } 00:04:25.906 } 00:04:25.906 ] 00:04:25.906 } 00:04:25.906 ] 00:04:25.906 } 00:04:25.906 15:47:05 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:25.906 15:47:05 -- rpc/skip_rpc.sh@40 -- # killprocess 2258352 00:04:25.906 15:47:05 -- common/autotest_common.sh@936 -- # '[' -z 2258352 ']' 00:04:25.906 15:47:05 -- common/autotest_common.sh@940 -- # kill -0 2258352 00:04:25.906 15:47:05 -- common/autotest_common.sh@941 -- # uname 00:04:25.906 15:47:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:25.906 15:47:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2258352 00:04:25.906 15:47:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:25.906 15:47:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:25.906 15:47:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2258352' 00:04:25.906 killing process with pid 2258352 00:04:25.906 15:47:05 -- common/autotest_common.sh@955 -- # kill 2258352 00:04:25.906 15:47:05 -- common/autotest_common.sh@960 -- # wait 2258352 00:04:28.441 15:47:07 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2258938 00:04:28.441 15:47:07 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:28.441 15:47:07 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:33.708 15:47:12 -- rpc/skip_rpc.sh@50 -- # killprocess 2258938 00:04:33.708 15:47:12 -- common/autotest_common.sh@936 -- # '[' -z 2258938 ']' 00:04:33.708 15:47:12 -- common/autotest_common.sh@940 -- # kill -0 2258938 00:04:33.708 15:47:12 -- common/autotest_common.sh@941 -- # uname 00:04:33.708 15:47:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:33.708 15:47:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2258938 00:04:33.708 15:47:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:33.708 15:47:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:33.708 15:47:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2258938' 00:04:33.708 killing process with pid 2258938 00:04:33.708 15:47:12 -- common/autotest_common.sh@955 -- # kill 2258938 00:04:33.708 15:47:12 -- common/autotest_common.sh@960 -- # wait 2258938 00:04:35.609 15:47:15 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:35.609 15:47:15 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:35.609 00:04:35.609 real 0m11.227s 00:04:35.609 user 0m10.795s 00:04:35.609 sys 0m0.803s 00:04:35.609 
15:47:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:35.609 15:47:15 -- common/autotest_common.sh@10 -- # set +x 00:04:35.609 ************************************ 00:04:35.609 END TEST skip_rpc_with_json 00:04:35.609 ************************************ 00:04:35.609 15:47:15 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:35.609 15:47:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:35.609 15:47:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:35.609 15:47:15 -- common/autotest_common.sh@10 -- # set +x 00:04:35.609 ************************************ 00:04:35.609 START TEST skip_rpc_with_delay 00:04:35.609 ************************************ 00:04:35.609 15:47:15 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:04:35.609 15:47:15 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:35.609 15:47:15 -- common/autotest_common.sh@638 -- # local es=0 00:04:35.609 15:47:15 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:35.609 15:47:15 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.609 15:47:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:35.609 15:47:15 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.609 15:47:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:35.609 15:47:15 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.609 15:47:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:35.609 15:47:15 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.609 15:47:15 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:35.609 15:47:15 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:35.867 [2024-04-26 15:47:15.361705] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
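For reference, the failure exercised above can be reproduced outside the harness; a minimal sketch in bash, using the exact binary path and flags from this run (the surrounding error handling is illustrative, not the suite's NOT() helper):

  # spdk_tgt is expected to refuse --wait-for-rpc when the RPC server is disabled (non-zero exit)
  if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo 'expected spdk_tgt to reject --wait-for-rpc without an RPC server' >&2
      exit 1
  fi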
00:04:35.867 [2024-04-26 15:47:15.361792] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:35.867 15:47:15 -- common/autotest_common.sh@641 -- # es=1 00:04:35.867 15:47:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:35.868 15:47:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:35.868 15:47:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:35.868 00:04:35.868 real 0m0.137s 00:04:35.868 user 0m0.068s 00:04:35.868 sys 0m0.067s 00:04:35.868 15:47:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:35.868 15:47:15 -- common/autotest_common.sh@10 -- # set +x 00:04:35.868 ************************************ 00:04:35.868 END TEST skip_rpc_with_delay 00:04:35.868 ************************************ 00:04:35.868 15:47:15 -- rpc/skip_rpc.sh@77 -- # uname 00:04:35.868 15:47:15 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:35.868 15:47:15 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:35.868 15:47:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:35.868 15:47:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:35.868 15:47:15 -- common/autotest_common.sh@10 -- # set +x 00:04:36.126 ************************************ 00:04:36.126 START TEST exit_on_failed_rpc_init 00:04:36.126 ************************************ 00:04:36.126 15:47:15 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:04:36.126 15:47:15 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2260280 00:04:36.126 15:47:15 -- rpc/skip_rpc.sh@63 -- # waitforlisten 2260280 00:04:36.126 15:47:15 -- common/autotest_common.sh@817 -- # '[' -z 2260280 ']' 00:04:36.126 15:47:15 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.126 15:47:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.126 15:47:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:36.126 15:47:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.126 15:47:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:36.126 15:47:15 -- common/autotest_common.sh@10 -- # set +x 00:04:36.126 [2024-04-26 15:47:15.631022] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:04:36.126 [2024-04-26 15:47:15.631116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2260280 ] 00:04:36.126 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.126 [2024-04-26 15:47:15.734935] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.385 [2024-04-26 15:47:15.963294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.319 15:47:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:37.319 15:47:16 -- common/autotest_common.sh@850 -- # return 0 00:04:37.319 15:47:16 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.319 15:47:16 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:37.319 15:47:16 -- common/autotest_common.sh@638 -- # local es=0 00:04:37.319 15:47:16 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:37.319 15:47:16 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.320 15:47:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:37.320 15:47:16 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.320 15:47:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:37.320 15:47:16 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.320 15:47:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:37.320 15:47:16 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.320 15:47:16 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:37.320 15:47:16 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:37.320 [2024-04-26 15:47:16.953847] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:04:37.320 [2024-04-26 15:47:16.953933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2260516 ] 00:04:37.578 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.578 [2024-04-26 15:47:17.054445] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.836 [2024-04-26 15:47:17.281332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.836 [2024-04-26 15:47:17.281411] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
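The "socket in use" error above is the intended outcome of exit_on_failed_rpc_init: a second target pointed at the default /var/tmp/spdk.sock must fail to initialize. A minimal sketch with the core masks from this run; the fixed sleep is an illustrative stand-in for the suite's waitforlisten helper:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 &   # first instance claims /var/tmp/spdk.sock
  first_pid=$!
  sleep 2                                                                          # illustrative wait; the test polls the socket instead
  if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2; then
      echo 'second spdk_tgt unexpectedly started on the same RPC socket' >&2
  fi
  kill -SIGINT "$first_pid"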
00:04:37.836 [2024-04-26 15:47:17.281427] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:37.836 [2024-04-26 15:47:17.281438] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:38.094 15:47:17 -- common/autotest_common.sh@641 -- # es=234 00:04:38.094 15:47:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:38.094 15:47:17 -- common/autotest_common.sh@650 -- # es=106 00:04:38.094 15:47:17 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:38.094 15:47:17 -- common/autotest_common.sh@658 -- # es=1 00:04:38.094 15:47:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:38.094 15:47:17 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:38.094 15:47:17 -- rpc/skip_rpc.sh@70 -- # killprocess 2260280 00:04:38.094 15:47:17 -- common/autotest_common.sh@936 -- # '[' -z 2260280 ']' 00:04:38.094 15:47:17 -- common/autotest_common.sh@940 -- # kill -0 2260280 00:04:38.094 15:47:17 -- common/autotest_common.sh@941 -- # uname 00:04:38.094 15:47:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:38.094 15:47:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2260280 00:04:38.094 15:47:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:38.094 15:47:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:38.094 15:47:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2260280' 00:04:38.094 killing process with pid 2260280 00:04:38.094 15:47:17 -- common/autotest_common.sh@955 -- # kill 2260280 00:04:38.094 15:47:17 -- common/autotest_common.sh@960 -- # wait 2260280 00:04:40.625 00:04:40.625 real 0m4.556s 00:04:40.625 user 0m5.184s 00:04:40.625 sys 0m0.579s 00:04:40.625 15:47:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:40.625 15:47:20 -- common/autotest_common.sh@10 -- # set +x 00:04:40.625 ************************************ 00:04:40.625 END TEST exit_on_failed_rpc_init 00:04:40.625 ************************************ 00:04:40.625 15:47:20 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:40.625 00:04:40.625 real 0m23.985s 00:04:40.626 user 0m23.343s 00:04:40.626 sys 0m2.191s 00:04:40.626 15:47:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:40.626 15:47:20 -- common/autotest_common.sh@10 -- # set +x 00:04:40.626 ************************************ 00:04:40.626 END TEST skip_rpc 00:04:40.626 ************************************ 00:04:40.626 15:47:20 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:40.626 15:47:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:40.626 15:47:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:40.626 15:47:20 -- common/autotest_common.sh@10 -- # set +x 00:04:40.884 ************************************ 00:04:40.884 START TEST rpc_client 00:04:40.884 ************************************ 00:04:40.884 15:47:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:40.884 * Looking for test storage... 
00:04:40.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:40.884 15:47:20 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:40.884 OK 00:04:40.884 15:47:20 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:40.884 00:04:40.884 real 0m0.150s 00:04:40.884 user 0m0.069s 00:04:40.884 sys 0m0.089s 00:04:40.884 15:47:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:40.884 15:47:20 -- common/autotest_common.sh@10 -- # set +x 00:04:40.884 ************************************ 00:04:40.884 END TEST rpc_client 00:04:40.884 ************************************ 00:04:40.884 15:47:20 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:40.884 15:47:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:40.884 15:47:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:40.884 15:47:20 -- common/autotest_common.sh@10 -- # set +x 00:04:41.143 ************************************ 00:04:41.143 START TEST json_config 00:04:41.143 ************************************ 00:04:41.143 15:47:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:41.143 15:47:20 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:41.143 15:47:20 -- nvmf/common.sh@7 -- # uname -s 00:04:41.143 15:47:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.143 15:47:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.143 15:47:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.143 15:47:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.143 15:47:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.143 15:47:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.143 15:47:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.143 15:47:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.143 15:47:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.143 15:47:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.143 15:47:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:41.143 15:47:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:41.143 15:47:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.143 15:47:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.143 15:47:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.144 15:47:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.144 15:47:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:41.144 15:47:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.144 15:47:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.144 15:47:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.144 15:47:20 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.144 15:47:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.144 15:47:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.144 15:47:20 -- paths/export.sh@5 -- # export PATH 00:04:41.144 15:47:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.144 15:47:20 -- nvmf/common.sh@47 -- # : 0 00:04:41.144 15:47:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:41.144 15:47:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:41.144 15:47:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.144 15:47:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.144 15:47:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.144 15:47:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:41.144 15:47:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:41.144 15:47:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:41.144 15:47:20 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:41.144 15:47:20 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:41.144 15:47:20 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:41.144 15:47:20 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:41.144 15:47:20 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:41.144 15:47:20 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:41.144 15:47:20 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:41.144 15:47:20 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:41.144 15:47:20 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:41.144 15:47:20 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:41.144 15:47:20 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:04:41.144 15:47:20 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:41.144 15:47:20 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:41.144 15:47:20 -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:41.144 15:47:20 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:41.144 15:47:20 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:41.144 INFO: JSON configuration test init 00:04:41.144 15:47:20 -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:41.144 15:47:20 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:41.144 15:47:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:41.144 15:47:20 -- common/autotest_common.sh@10 -- # set +x 00:04:41.144 15:47:20 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:41.144 15:47:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:41.144 15:47:20 -- common/autotest_common.sh@10 -- # set +x 00:04:41.144 15:47:20 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:41.144 15:47:20 -- json_config/common.sh@9 -- # local app=target 00:04:41.144 15:47:20 -- json_config/common.sh@10 -- # shift 00:04:41.144 15:47:20 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:41.144 15:47:20 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:41.144 15:47:20 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:41.144 15:47:20 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.144 15:47:20 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.144 15:47:20 -- json_config/common.sh@22 -- # app_pid["$app"]=2261335 00:04:41.144 15:47:20 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:41.144 Waiting for target to run... 00:04:41.144 15:47:20 -- json_config/common.sh@25 -- # waitforlisten 2261335 /var/tmp/spdk_tgt.sock 00:04:41.144 15:47:20 -- common/autotest_common.sh@817 -- # '[' -z 2261335 ']' 00:04:41.144 15:47:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:41.144 15:47:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:41.144 15:47:20 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:41.144 15:47:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:41.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:41.144 15:47:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:41.144 15:47:20 -- common/autotest_common.sh@10 -- # set +x 00:04:41.144 [2024-04-26 15:47:20.792726] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
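The target being started here is driven entirely over the private socket named in app_socket; a minimal sketch of that launch-and-load pattern, assuming gen_nvme.sh output is piped straight into load_config as the adjacent trace lines suggest (the sleep replaces the harness's waitforlisten):

  spdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$spdk_dir/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  tgt_pid=$!
  sleep 2   # illustrative; json_config/common.sh waits for the RPC socket to come up
  "$spdk_dir/scripts/gen_nvme.sh" --json-with-subsystems | \
      "$spdk_dir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock load_config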
00:04:41.144 [2024-04-26 15:47:20.792828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2261335 ] 00:04:41.403 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.661 [2024-04-26 15:47:21.270584] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.919 [2024-04-26 15:47:21.501028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.919 15:47:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:41.920 15:47:21 -- common/autotest_common.sh@850 -- # return 0 00:04:41.920 15:47:21 -- json_config/common.sh@26 -- # echo '' 00:04:41.920 00:04:41.920 15:47:21 -- json_config/json_config.sh@269 -- # create_accel_config 00:04:41.920 15:47:21 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:41.920 15:47:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:41.920 15:47:21 -- common/autotest_common.sh@10 -- # set +x 00:04:41.920 15:47:21 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:41.920 15:47:21 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:41.920 15:47:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:41.920 15:47:21 -- common/autotest_common.sh@10 -- # set +x 00:04:41.920 15:47:21 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:41.920 15:47:21 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:41.920 15:47:21 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:46.103 15:47:25 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:46.103 15:47:25 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:46.103 15:47:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:46.103 15:47:25 -- common/autotest_common.sh@10 -- # set +x 00:04:46.103 15:47:25 -- json_config/json_config.sh@45 -- # local ret=0 00:04:46.103 15:47:25 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:46.103 15:47:25 -- json_config/json_config.sh@46 -- # local enabled_types 00:04:46.103 15:47:25 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:46.103 15:47:25 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:46.103 15:47:25 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:46.103 15:47:25 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:46.103 15:47:25 -- json_config/json_config.sh@48 -- # local get_types 00:04:46.103 15:47:25 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:46.103 15:47:25 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:46.103 15:47:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:46.103 15:47:25 -- common/autotest_common.sh@10 -- # set +x 00:04:46.103 15:47:25 -- json_config/json_config.sh@55 -- # return 0 00:04:46.103 15:47:25 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:46.103 15:47:25 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:46.103 15:47:25 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:46.103 15:47:25 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:46.103 15:47:25 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:46.103 15:47:25 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:46.103 15:47:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:46.103 15:47:25 -- common/autotest_common.sh@10 -- # set +x 00:04:46.103 15:47:25 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:46.103 15:47:25 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:46.103 15:47:25 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:46.103 15:47:25 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:46.103 15:47:25 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:46.103 MallocForNvmf0 00:04:46.103 15:47:25 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:46.103 15:47:25 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:46.361 MallocForNvmf1 00:04:46.361 15:47:25 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:46.361 15:47:25 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:46.619 [2024-04-26 15:47:26.103119] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:46.619 15:47:26 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:46.619 15:47:26 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:46.619 15:47:26 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:46.619 15:47:26 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:46.877 15:47:26 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:46.877 15:47:26 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:47.135 15:47:26 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:47.135 15:47:26 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:47.135 [2024-04-26 15:47:26.777331] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:47.135 15:47:26 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:47.135 15:47:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:47.135 
15:47:26 -- common/autotest_common.sh@10 -- # set +x 00:04:47.394 15:47:26 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:47.394 15:47:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:47.394 15:47:26 -- common/autotest_common.sh@10 -- # set +x 00:04:47.394 15:47:26 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:47.394 15:47:26 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:47.394 15:47:26 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:47.394 MallocBdevForConfigChangeCheck 00:04:47.394 15:47:27 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:47.394 15:47:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:47.394 15:47:27 -- common/autotest_common.sh@10 -- # set +x 00:04:47.662 15:47:27 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:47.662 15:47:27 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:47.923 15:47:27 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:47.923 INFO: shutting down applications... 00:04:47.923 15:47:27 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:47.923 15:47:27 -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:47.923 15:47:27 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:47.923 15:47:27 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:49.296 Calling clear_iscsi_subsystem 00:04:49.296 Calling clear_nvmf_subsystem 00:04:49.296 Calling clear_nbd_subsystem 00:04:49.296 Calling clear_ublk_subsystem 00:04:49.296 Calling clear_vhost_blk_subsystem 00:04:49.296 Calling clear_vhost_scsi_subsystem 00:04:49.296 Calling clear_bdev_subsystem 00:04:49.296 15:47:28 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:49.296 15:47:28 -- json_config/json_config.sh@343 -- # count=100 00:04:49.296 15:47:28 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:49.296 15:47:28 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:49.296 15:47:28 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:49.296 15:47:28 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:49.862 15:47:29 -- json_config/json_config.sh@345 -- # break 00:04:49.863 15:47:29 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:49.863 15:47:29 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:49.863 15:47:29 -- json_config/common.sh@31 -- # local app=target 00:04:49.863 15:47:29 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:49.863 15:47:29 -- json_config/common.sh@35 -- # [[ -n 2261335 ]] 00:04:49.863 15:47:29 -- json_config/common.sh@38 -- # kill -SIGINT 2261335 00:04:49.863 15:47:29 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:49.863 15:47:29 -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.863 15:47:29 -- json_config/common.sh@41 -- # kill -0 2261335 00:04:49.863 15:47:29 -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.121 15:47:29 -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.121 15:47:29 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.121 15:47:29 -- json_config/common.sh@41 -- # kill -0 2261335 00:04:50.121 15:47:29 -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.687 15:47:30 -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.687 15:47:30 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.687 15:47:30 -- json_config/common.sh@41 -- # kill -0 2261335 00:04:50.687 15:47:30 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:50.687 15:47:30 -- json_config/common.sh@43 -- # break 00:04:50.687 15:47:30 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:50.687 15:47:30 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:50.687 SPDK target shutdown done 00:04:50.687 15:47:30 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:50.687 INFO: relaunching applications... 00:04:50.687 15:47:30 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:50.687 15:47:30 -- json_config/common.sh@9 -- # local app=target 00:04:50.687 15:47:30 -- json_config/common.sh@10 -- # shift 00:04:50.687 15:47:30 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:50.687 15:47:30 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:50.687 15:47:30 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:50.687 15:47:30 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.687 15:47:30 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.687 15:47:30 -- json_config/common.sh@22 -- # app_pid["$app"]=2263064 00:04:50.687 15:47:30 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:50.687 Waiting for target to run... 00:04:50.687 15:47:30 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:50.687 15:47:30 -- json_config/common.sh@25 -- # waitforlisten 2263064 /var/tmp/spdk_tgt.sock 00:04:50.687 15:47:30 -- common/autotest_common.sh@817 -- # '[' -z 2263064 ']' 00:04:50.687 15:47:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.687 15:47:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:50.687 15:47:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:50.687 15:47:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:50.687 15:47:30 -- common/autotest_common.sh@10 -- # set +x 00:04:50.687 [2024-04-26 15:47:30.347060] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
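The spdk_tgt_config.json that pid 2263064 loads here was built up by the RPC sequence traced earlier in this test; a minimal sketch of that same sequence, with every bdev name, size, NQN and address taken from this run (only the final redirect of save_config into the config path is an assumption):

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  $rpc save_config > /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json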
00:04:50.687 [2024-04-26 15:47:30.347179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2263064 ] 00:04:50.945 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.204 [2024-04-26 15:47:30.834609] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.461 [2024-04-26 15:47:31.050383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.690 [2024-04-26 15:47:34.835754] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:55.691 [2024-04-26 15:47:34.868143] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:55.691 15:47:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:55.691 15:47:35 -- common/autotest_common.sh@850 -- # return 0 00:04:55.691 15:47:35 -- json_config/common.sh@26 -- # echo '' 00:04:55.691 00:04:55.691 15:47:35 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:55.691 15:47:35 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:55.691 INFO: Checking if target configuration is the same... 00:04:55.691 15:47:35 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.691 15:47:35 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:55.691 15:47:35 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:55.691 + '[' 2 -ne 2 ']' 00:04:55.691 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:55.691 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:55.691 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:55.691 +++ basename /dev/fd/62 00:04:55.691 ++ mktemp /tmp/62.XXX 00:04:55.691 + tmp_file_1=/tmp/62.aTa 00:04:55.691 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.691 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:55.691 + tmp_file_2=/tmp/spdk_tgt_config.json.ZeQ 00:04:55.691 + ret=0 00:04:55.691 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:55.979 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:55.979 + diff -u /tmp/62.aTa /tmp/spdk_tgt_config.json.ZeQ 00:04:55.979 + echo 'INFO: JSON config files are the same' 00:04:55.979 INFO: JSON config files are the same 00:04:55.979 + rm /tmp/62.aTa /tmp/spdk_tgt_config.json.ZeQ 00:04:55.979 + exit 0 00:04:55.979 15:47:35 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:55.979 15:47:35 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:55.979 INFO: changing configuration and checking if this can be detected... 
00:04:55.979 15:47:35 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:55.979 15:47:35 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:56.237 15:47:35 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.237 15:47:35 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:56.237 15:47:35 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:56.237 + '[' 2 -ne 2 ']' 00:04:56.237 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:56.237 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:56.237 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:56.237 +++ basename /dev/fd/62 00:04:56.237 ++ mktemp /tmp/62.XXX 00:04:56.237 + tmp_file_1=/tmp/62.nGE 00:04:56.237 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.237 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:56.237 + tmp_file_2=/tmp/spdk_tgt_config.json.olt 00:04:56.237 + ret=0 00:04:56.237 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:56.495 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:56.495 + diff -u /tmp/62.nGE /tmp/spdk_tgt_config.json.olt 00:04:56.495 + ret=1 00:04:56.495 + echo '=== Start of file: /tmp/62.nGE ===' 00:04:56.495 + cat /tmp/62.nGE 00:04:56.495 + echo '=== End of file: /tmp/62.nGE ===' 00:04:56.495 + echo '' 00:04:56.495 + echo '=== Start of file: /tmp/spdk_tgt_config.json.olt ===' 00:04:56.495 + cat /tmp/spdk_tgt_config.json.olt 00:04:56.495 + echo '=== End of file: /tmp/spdk_tgt_config.json.olt ===' 00:04:56.495 + echo '' 00:04:56.495 + rm /tmp/62.nGE /tmp/spdk_tgt_config.json.olt 00:04:56.495 + exit 1 00:04:56.495 15:47:36 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:56.495 INFO: configuration change detected. 
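The ret=1 path above is what "configuration change detected" means in practice: once MallocBdevForConfigChangeCheck is deleted, a freshly sorted save_config no longer matches the saved file. A minimal sketch of that comparison (output file names are illustrative; this run used mktemp paths such as /tmp/62.nGE), assuming config_filter.py filters stdin to stdout the way json_diff.sh drives it:

  spdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$spdk_dir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  "$spdk_dir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config | \
      "$spdk_dir/test/json_config/config_filter.py" -method sort > /tmp/current_config.json
  "$spdk_dir/test/json_config/config_filter.py" -method sort < "$spdk_dir/spdk_tgt_config.json" > /tmp/saved_config.json
  diff -u /tmp/saved_config.json /tmp/current_config.json || echo 'INFO: configuration change detected.'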
00:04:56.495 15:47:36 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:56.495 15:47:36 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:56.495 15:47:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:56.495 15:47:36 -- common/autotest_common.sh@10 -- # set +x 00:04:56.495 15:47:36 -- json_config/json_config.sh@307 -- # local ret=0 00:04:56.495 15:47:36 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:56.495 15:47:36 -- json_config/json_config.sh@317 -- # [[ -n 2263064 ]] 00:04:56.495 15:47:36 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:56.495 15:47:36 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:56.495 15:47:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:56.495 15:47:36 -- common/autotest_common.sh@10 -- # set +x 00:04:56.495 15:47:36 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:56.495 15:47:36 -- json_config/json_config.sh@193 -- # uname -s 00:04:56.495 15:47:36 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:56.495 15:47:36 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:56.495 15:47:36 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:56.495 15:47:36 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:56.495 15:47:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:56.495 15:47:36 -- common/autotest_common.sh@10 -- # set +x 00:04:56.495 15:47:36 -- json_config/json_config.sh@323 -- # killprocess 2263064 00:04:56.495 15:47:36 -- common/autotest_common.sh@936 -- # '[' -z 2263064 ']' 00:04:56.495 15:47:36 -- common/autotest_common.sh@940 -- # kill -0 2263064 00:04:56.495 15:47:36 -- common/autotest_common.sh@941 -- # uname 00:04:56.495 15:47:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:56.495 15:47:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2263064 00:04:56.495 15:47:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:56.495 15:47:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:56.495 15:47:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2263064' 00:04:56.495 killing process with pid 2263064 00:04:56.495 15:47:36 -- common/autotest_common.sh@955 -- # kill 2263064 00:04:56.495 15:47:36 -- common/autotest_common.sh@960 -- # wait 2263064 00:04:59.021 15:47:38 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:59.021 15:47:38 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:59.021 15:47:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:59.021 15:47:38 -- common/autotest_common.sh@10 -- # set +x 00:04:59.021 15:47:38 -- json_config/json_config.sh@328 -- # return 0 00:04:59.021 15:47:38 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:59.021 INFO: Success 00:04:59.021 00:04:59.021 real 0m17.718s 00:04:59.021 user 0m18.359s 00:04:59.021 sys 0m2.298s 00:04:59.021 15:47:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:59.021 15:47:38 -- common/autotest_common.sh@10 -- # set +x 00:04:59.021 ************************************ 00:04:59.021 END TEST json_config 00:04:59.021 ************************************ 00:04:59.021 15:47:38 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:59.021 15:47:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.021 15:47:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.021 15:47:38 -- common/autotest_common.sh@10 -- # set +x 00:04:59.021 ************************************ 00:04:59.021 START TEST json_config_extra_key 00:04:59.021 ************************************ 00:04:59.021 15:47:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:59.021 15:47:38 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:59.021 15:47:38 -- nvmf/common.sh@7 -- # uname -s 00:04:59.021 15:47:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.021 15:47:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.021 15:47:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.021 15:47:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.021 15:47:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.021 15:47:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.021 15:47:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.021 15:47:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.021 15:47:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.021 15:47:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.021 15:47:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:59.021 15:47:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:59.021 15:47:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.021 15:47:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.021 15:47:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.021 15:47:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.021 15:47:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:59.021 15:47:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.021 15:47:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.021 15:47:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.021 15:47:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.021 15:47:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.021 15:47:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.021 15:47:38 -- paths/export.sh@5 -- # export PATH 00:04:59.021 15:47:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.021 15:47:38 -- nvmf/common.sh@47 -- # : 0 00:04:59.021 15:47:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:59.021 15:47:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:59.021 15:47:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.021 15:47:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.021 15:47:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.021 15:47:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:59.021 15:47:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:59.021 15:47:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:59.021 15:47:38 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:59.021 15:47:38 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:59.021 15:47:38 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:59.021 15:47:38 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:59.021 15:47:38 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:59.021 15:47:38 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:59.021 15:47:38 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:59.021 15:47:38 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:59.021 15:47:38 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:59.021 15:47:38 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:59.021 15:47:38 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:59.021 INFO: launching applications... 
00:04:59.021 15:47:38 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:59.021 15:47:38 -- json_config/common.sh@9 -- # local app=target 00:04:59.021 15:47:38 -- json_config/common.sh@10 -- # shift 00:04:59.021 15:47:38 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:59.021 15:47:38 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:59.021 15:47:38 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:59.021 15:47:38 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.021 15:47:38 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.021 15:47:38 -- json_config/common.sh@22 -- # app_pid["$app"]=2264575 00:04:59.021 15:47:38 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:59.021 Waiting for target to run... 00:04:59.021 15:47:38 -- json_config/common.sh@25 -- # waitforlisten 2264575 /var/tmp/spdk_tgt.sock 00:04:59.021 15:47:38 -- common/autotest_common.sh@817 -- # '[' -z 2264575 ']' 00:04:59.021 15:47:38 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:59.021 15:47:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:59.021 15:47:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:59.021 15:47:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:59.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:59.021 15:47:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:59.021 15:47:38 -- common/autotest_common.sh@10 -- # set +x 00:04:59.021 [2024-04-26 15:47:38.672117] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:04:59.021 [2024-04-26 15:47:38.672218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2264575 ] 00:04:59.279 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.537 [2024-04-26 15:47:39.159728] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.795 [2024-04-26 15:47:39.375784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.730 15:47:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:00.730 15:47:40 -- common/autotest_common.sh@850 -- # return 0 00:05:00.730 15:47:40 -- json_config/common.sh@26 -- # echo '' 00:05:00.730 00:05:00.730 15:47:40 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:00.730 INFO: shutting down applications... 
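The shutdown that follows uses the same poll loop every json_config test relies on: SIGINT the target, then re-check the pid every half second for up to 30 iterations. A minimal sketch, with $tgt_pid standing in for the pid the harness tracks (2264575 in this run):

  kill -SIGINT "$tgt_pid"
  for ((i = 0; i < 30; i++)); do
      if ! kill -0 "$tgt_pid" 2>/dev/null; then
          echo 'SPDK target shutdown done'
          break
      fi
      sleep 0.5
  done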
00:05:00.730 15:47:40 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:00.730 15:47:40 -- json_config/common.sh@31 -- # local app=target 00:05:00.730 15:47:40 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:00.730 15:47:40 -- json_config/common.sh@35 -- # [[ -n 2264575 ]] 00:05:00.730 15:47:40 -- json_config/common.sh@38 -- # kill -SIGINT 2264575 00:05:00.730 15:47:40 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:00.730 15:47:40 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.730 15:47:40 -- json_config/common.sh@41 -- # kill -0 2264575 00:05:00.730 15:47:40 -- json_config/common.sh@45 -- # sleep 0.5 00:05:01.296 15:47:40 -- json_config/common.sh@40 -- # (( i++ )) 00:05:01.296 15:47:40 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.296 15:47:40 -- json_config/common.sh@41 -- # kill -0 2264575 00:05:01.296 15:47:40 -- json_config/common.sh@45 -- # sleep 0.5 00:05:01.554 15:47:41 -- json_config/common.sh@40 -- # (( i++ )) 00:05:01.554 15:47:41 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.554 15:47:41 -- json_config/common.sh@41 -- # kill -0 2264575 00:05:01.554 15:47:41 -- json_config/common.sh@45 -- # sleep 0.5 00:05:02.119 15:47:41 -- json_config/common.sh@40 -- # (( i++ )) 00:05:02.119 15:47:41 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.119 15:47:41 -- json_config/common.sh@41 -- # kill -0 2264575 00:05:02.119 15:47:41 -- json_config/common.sh@45 -- # sleep 0.5 00:05:02.685 15:47:42 -- json_config/common.sh@40 -- # (( i++ )) 00:05:02.685 15:47:42 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.685 15:47:42 -- json_config/common.sh@41 -- # kill -0 2264575 00:05:02.685 15:47:42 -- json_config/common.sh@45 -- # sleep 0.5 00:05:03.250 15:47:42 -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.250 15:47:42 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.250 15:47:42 -- json_config/common.sh@41 -- # kill -0 2264575 00:05:03.250 15:47:42 -- json_config/common.sh@45 -- # sleep 0.5 00:05:03.816 15:47:43 -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.816 15:47:43 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.816 15:47:43 -- json_config/common.sh@41 -- # kill -0 2264575 00:05:03.816 15:47:43 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:03.816 15:47:43 -- json_config/common.sh@43 -- # break 00:05:03.816 15:47:43 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:03.816 15:47:43 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:03.816 SPDK target shutdown done 00:05:03.816 15:47:43 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:03.816 Success 00:05:03.816 00:05:03.816 real 0m4.713s 00:05:03.816 user 0m4.091s 00:05:03.816 sys 0m0.699s 00:05:03.816 15:47:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:03.816 15:47:43 -- common/autotest_common.sh@10 -- # set +x 00:05:03.816 ************************************ 00:05:03.816 END TEST json_config_extra_key 00:05:03.816 ************************************ 00:05:03.816 15:47:43 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:03.816 15:47:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.816 15:47:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.816 15:47:43 -- common/autotest_common.sh@10 -- # set +x 00:05:03.816 ************************************ 00:05:03.816 START TEST alias_rpc 00:05:03.817 ************************************ 00:05:03.817 
15:47:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:03.817 * Looking for test storage... 00:05:03.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:03.817 15:47:43 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:03.817 15:47:43 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2265520 00:05:03.817 15:47:43 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2265520 00:05:03.817 15:47:43 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.817 15:47:43 -- common/autotest_common.sh@817 -- # '[' -z 2265520 ']' 00:05:03.817 15:47:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.817 15:47:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:03.817 15:47:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.817 15:47:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:03.817 15:47:43 -- common/autotest_common.sh@10 -- # set +x 00:05:04.075 [2024-04-26 15:47:43.535631] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:04.076 [2024-04-26 15:47:43.535724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2265520 ] 00:05:04.076 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.076 [2024-04-26 15:47:43.639978] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.334 [2024-04-26 15:47:43.858463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.270 15:47:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:05.270 15:47:44 -- common/autotest_common.sh@850 -- # return 0 00:05:05.270 15:47:44 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:05.528 15:47:44 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2265520 00:05:05.528 15:47:44 -- common/autotest_common.sh@936 -- # '[' -z 2265520 ']' 00:05:05.528 15:47:44 -- common/autotest_common.sh@940 -- # kill -0 2265520 00:05:05.528 15:47:44 -- common/autotest_common.sh@941 -- # uname 00:05:05.528 15:47:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:05.528 15:47:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2265520 00:05:05.528 15:47:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:05.528 15:47:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:05.528 15:47:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2265520' 00:05:05.528 killing process with pid 2265520 00:05:05.528 15:47:45 -- common/autotest_common.sh@955 -- # kill 2265520 00:05:05.528 15:47:45 -- common/autotest_common.sh@960 -- # wait 2265520 00:05:08.069 00:05:08.069 real 0m4.036s 00:05:08.069 user 0m4.054s 00:05:08.069 sys 0m0.507s 00:05:08.069 15:47:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:08.069 15:47:47 -- common/autotest_common.sh@10 -- # set +x 00:05:08.069 ************************************ 00:05:08.069 END TEST alias_rpc 00:05:08.069 
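alias_rpc above starts a bare spdk_tgt and then drives it with scripts/rpc.py load_config -i on the default socket; judging by the test name, the -i switch presumably keeps the older (aliased) RPC method names accepted, though that reading of the flag is an assumption here. A save_config/load_config round trip of the same flavour, run against any live target, would look roughly like this (load_config reads JSON on stdin; -i is passed exactly as in the trace):

  # Dump the live configuration as JSON, then feed it back in through load_config.
  ./scripts/rpc.py save_config > /tmp/spdk_config.json
  ./scripts/rpc.py load_config -i < /tmp/spdk_config.json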
************************************ 00:05:08.069 15:47:47 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:08.069 15:47:47 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:08.069 15:47:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.069 15:47:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.069 15:47:47 -- common/autotest_common.sh@10 -- # set +x 00:05:08.069 ************************************ 00:05:08.069 START TEST spdkcli_tcp 00:05:08.069 ************************************ 00:05:08.069 15:47:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:08.069 * Looking for test storage... 00:05:08.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:08.069 15:47:47 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:08.069 15:47:47 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:08.069 15:47:47 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:08.069 15:47:47 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:08.069 15:47:47 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:08.069 15:47:47 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:08.070 15:47:47 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:08.070 15:47:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:08.070 15:47:47 -- common/autotest_common.sh@10 -- # set +x 00:05:08.070 15:47:47 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2266307 00:05:08.070 15:47:47 -- spdkcli/tcp.sh@27 -- # waitforlisten 2266307 00:05:08.070 15:47:47 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:08.070 15:47:47 -- common/autotest_common.sh@817 -- # '[' -z 2266307 ']' 00:05:08.070 15:47:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.070 15:47:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:08.070 15:47:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.070 15:47:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:08.070 15:47:47 -- common/autotest_common.sh@10 -- # set +x 00:05:08.070 [2024-04-26 15:47:47.716178] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:05:08.070 [2024-04-26 15:47:47.716262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2266307 ] 00:05:08.328 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.328 [2024-04-26 15:47:47.818691] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.587 [2024-04-26 15:47:48.036114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.587 [2024-04-26 15:47:48.036122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.522 15:47:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:09.522 15:47:48 -- common/autotest_common.sh@850 -- # return 0 00:05:09.522 15:47:48 -- spdkcli/tcp.sh@31 -- # socat_pid=2266544 00:05:09.522 15:47:48 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:09.522 15:47:48 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:09.522 [ 00:05:09.522 "bdev_malloc_delete", 00:05:09.522 "bdev_malloc_create", 00:05:09.522 "bdev_null_resize", 00:05:09.522 "bdev_null_delete", 00:05:09.522 "bdev_null_create", 00:05:09.522 "bdev_nvme_cuse_unregister", 00:05:09.522 "bdev_nvme_cuse_register", 00:05:09.522 "bdev_opal_new_user", 00:05:09.522 "bdev_opal_set_lock_state", 00:05:09.522 "bdev_opal_delete", 00:05:09.522 "bdev_opal_get_info", 00:05:09.522 "bdev_opal_create", 00:05:09.522 "bdev_nvme_opal_revert", 00:05:09.522 "bdev_nvme_opal_init", 00:05:09.522 "bdev_nvme_send_cmd", 00:05:09.522 "bdev_nvme_get_path_iostat", 00:05:09.522 "bdev_nvme_get_mdns_discovery_info", 00:05:09.522 "bdev_nvme_stop_mdns_discovery", 00:05:09.522 "bdev_nvme_start_mdns_discovery", 00:05:09.522 "bdev_nvme_set_multipath_policy", 00:05:09.522 "bdev_nvme_set_preferred_path", 00:05:09.522 "bdev_nvme_get_io_paths", 00:05:09.522 "bdev_nvme_remove_error_injection", 00:05:09.522 "bdev_nvme_add_error_injection", 00:05:09.522 "bdev_nvme_get_discovery_info", 00:05:09.522 "bdev_nvme_stop_discovery", 00:05:09.522 "bdev_nvme_start_discovery", 00:05:09.522 "bdev_nvme_get_controller_health_info", 00:05:09.522 "bdev_nvme_disable_controller", 00:05:09.522 "bdev_nvme_enable_controller", 00:05:09.522 "bdev_nvme_reset_controller", 00:05:09.522 "bdev_nvme_get_transport_statistics", 00:05:09.522 "bdev_nvme_apply_firmware", 00:05:09.522 "bdev_nvme_detach_controller", 00:05:09.522 "bdev_nvme_get_controllers", 00:05:09.522 "bdev_nvme_attach_controller", 00:05:09.522 "bdev_nvme_set_hotplug", 00:05:09.522 "bdev_nvme_set_options", 00:05:09.522 "bdev_passthru_delete", 00:05:09.522 "bdev_passthru_create", 00:05:09.522 "bdev_lvol_grow_lvstore", 00:05:09.522 "bdev_lvol_get_lvols", 00:05:09.522 "bdev_lvol_get_lvstores", 00:05:09.522 "bdev_lvol_delete", 00:05:09.522 "bdev_lvol_set_read_only", 00:05:09.522 "bdev_lvol_resize", 00:05:09.522 "bdev_lvol_decouple_parent", 00:05:09.522 "bdev_lvol_inflate", 00:05:09.522 "bdev_lvol_rename", 00:05:09.522 "bdev_lvol_clone_bdev", 00:05:09.522 "bdev_lvol_clone", 00:05:09.522 "bdev_lvol_snapshot", 00:05:09.522 "bdev_lvol_create", 00:05:09.522 "bdev_lvol_delete_lvstore", 00:05:09.522 "bdev_lvol_rename_lvstore", 00:05:09.522 "bdev_lvol_create_lvstore", 00:05:09.522 "bdev_raid_set_options", 00:05:09.522 "bdev_raid_remove_base_bdev", 00:05:09.522 "bdev_raid_add_base_bdev", 00:05:09.522 "bdev_raid_delete", 00:05:09.522 "bdev_raid_create", 
00:05:09.522 "bdev_raid_get_bdevs", 00:05:09.522 "bdev_error_inject_error", 00:05:09.522 "bdev_error_delete", 00:05:09.522 "bdev_error_create", 00:05:09.522 "bdev_split_delete", 00:05:09.522 "bdev_split_create", 00:05:09.522 "bdev_delay_delete", 00:05:09.522 "bdev_delay_create", 00:05:09.522 "bdev_delay_update_latency", 00:05:09.522 "bdev_zone_block_delete", 00:05:09.522 "bdev_zone_block_create", 00:05:09.522 "blobfs_create", 00:05:09.522 "blobfs_detect", 00:05:09.522 "blobfs_set_cache_size", 00:05:09.522 "bdev_aio_delete", 00:05:09.522 "bdev_aio_rescan", 00:05:09.522 "bdev_aio_create", 00:05:09.522 "bdev_ftl_set_property", 00:05:09.522 "bdev_ftl_get_properties", 00:05:09.522 "bdev_ftl_get_stats", 00:05:09.522 "bdev_ftl_unmap", 00:05:09.522 "bdev_ftl_unload", 00:05:09.522 "bdev_ftl_delete", 00:05:09.522 "bdev_ftl_load", 00:05:09.522 "bdev_ftl_create", 00:05:09.522 "bdev_virtio_attach_controller", 00:05:09.522 "bdev_virtio_scsi_get_devices", 00:05:09.522 "bdev_virtio_detach_controller", 00:05:09.522 "bdev_virtio_blk_set_hotplug", 00:05:09.522 "bdev_iscsi_delete", 00:05:09.522 "bdev_iscsi_create", 00:05:09.522 "bdev_iscsi_set_options", 00:05:09.522 "accel_error_inject_error", 00:05:09.522 "ioat_scan_accel_module", 00:05:09.522 "dsa_scan_accel_module", 00:05:09.522 "iaa_scan_accel_module", 00:05:09.522 "vfu_virtio_create_scsi_endpoint", 00:05:09.522 "vfu_virtio_scsi_remove_target", 00:05:09.522 "vfu_virtio_scsi_add_target", 00:05:09.522 "vfu_virtio_create_blk_endpoint", 00:05:09.522 "vfu_virtio_delete_endpoint", 00:05:09.522 "keyring_file_remove_key", 00:05:09.522 "keyring_file_add_key", 00:05:09.522 "iscsi_get_histogram", 00:05:09.522 "iscsi_enable_histogram", 00:05:09.522 "iscsi_set_options", 00:05:09.522 "iscsi_get_auth_groups", 00:05:09.522 "iscsi_auth_group_remove_secret", 00:05:09.522 "iscsi_auth_group_add_secret", 00:05:09.522 "iscsi_delete_auth_group", 00:05:09.522 "iscsi_create_auth_group", 00:05:09.522 "iscsi_set_discovery_auth", 00:05:09.522 "iscsi_get_options", 00:05:09.522 "iscsi_target_node_request_logout", 00:05:09.522 "iscsi_target_node_set_redirect", 00:05:09.522 "iscsi_target_node_set_auth", 00:05:09.522 "iscsi_target_node_add_lun", 00:05:09.522 "iscsi_get_stats", 00:05:09.522 "iscsi_get_connections", 00:05:09.522 "iscsi_portal_group_set_auth", 00:05:09.522 "iscsi_start_portal_group", 00:05:09.522 "iscsi_delete_portal_group", 00:05:09.522 "iscsi_create_portal_group", 00:05:09.522 "iscsi_get_portal_groups", 00:05:09.522 "iscsi_delete_target_node", 00:05:09.522 "iscsi_target_node_remove_pg_ig_maps", 00:05:09.522 "iscsi_target_node_add_pg_ig_maps", 00:05:09.522 "iscsi_create_target_node", 00:05:09.522 "iscsi_get_target_nodes", 00:05:09.522 "iscsi_delete_initiator_group", 00:05:09.522 "iscsi_initiator_group_remove_initiators", 00:05:09.522 "iscsi_initiator_group_add_initiators", 00:05:09.522 "iscsi_create_initiator_group", 00:05:09.522 "iscsi_get_initiator_groups", 00:05:09.522 "nvmf_set_crdt", 00:05:09.522 "nvmf_set_config", 00:05:09.522 "nvmf_set_max_subsystems", 00:05:09.522 "nvmf_subsystem_get_listeners", 00:05:09.522 "nvmf_subsystem_get_qpairs", 00:05:09.523 "nvmf_subsystem_get_controllers", 00:05:09.523 "nvmf_get_stats", 00:05:09.523 "nvmf_get_transports", 00:05:09.523 "nvmf_create_transport", 00:05:09.523 "nvmf_get_targets", 00:05:09.523 "nvmf_delete_target", 00:05:09.523 "nvmf_create_target", 00:05:09.523 "nvmf_subsystem_allow_any_host", 00:05:09.523 "nvmf_subsystem_remove_host", 00:05:09.523 "nvmf_subsystem_add_host", 00:05:09.523 "nvmf_ns_remove_host", 00:05:09.523 
"nvmf_ns_add_host", 00:05:09.523 "nvmf_subsystem_remove_ns", 00:05:09.523 "nvmf_subsystem_add_ns", 00:05:09.523 "nvmf_subsystem_listener_set_ana_state", 00:05:09.523 "nvmf_discovery_get_referrals", 00:05:09.523 "nvmf_discovery_remove_referral", 00:05:09.523 "nvmf_discovery_add_referral", 00:05:09.523 "nvmf_subsystem_remove_listener", 00:05:09.523 "nvmf_subsystem_add_listener", 00:05:09.523 "nvmf_delete_subsystem", 00:05:09.523 "nvmf_create_subsystem", 00:05:09.523 "nvmf_get_subsystems", 00:05:09.523 "env_dpdk_get_mem_stats", 00:05:09.523 "nbd_get_disks", 00:05:09.523 "nbd_stop_disk", 00:05:09.523 "nbd_start_disk", 00:05:09.523 "ublk_recover_disk", 00:05:09.523 "ublk_get_disks", 00:05:09.523 "ublk_stop_disk", 00:05:09.523 "ublk_start_disk", 00:05:09.523 "ublk_destroy_target", 00:05:09.523 "ublk_create_target", 00:05:09.523 "virtio_blk_create_transport", 00:05:09.523 "virtio_blk_get_transports", 00:05:09.523 "vhost_controller_set_coalescing", 00:05:09.523 "vhost_get_controllers", 00:05:09.523 "vhost_delete_controller", 00:05:09.523 "vhost_create_blk_controller", 00:05:09.523 "vhost_scsi_controller_remove_target", 00:05:09.523 "vhost_scsi_controller_add_target", 00:05:09.523 "vhost_start_scsi_controller", 00:05:09.523 "vhost_create_scsi_controller", 00:05:09.523 "thread_set_cpumask", 00:05:09.523 "framework_get_scheduler", 00:05:09.523 "framework_set_scheduler", 00:05:09.523 "framework_get_reactors", 00:05:09.523 "thread_get_io_channels", 00:05:09.523 "thread_get_pollers", 00:05:09.523 "thread_get_stats", 00:05:09.523 "framework_monitor_context_switch", 00:05:09.523 "spdk_kill_instance", 00:05:09.523 "log_enable_timestamps", 00:05:09.523 "log_get_flags", 00:05:09.523 "log_clear_flag", 00:05:09.523 "log_set_flag", 00:05:09.523 "log_get_level", 00:05:09.523 "log_set_level", 00:05:09.523 "log_get_print_level", 00:05:09.523 "log_set_print_level", 00:05:09.523 "framework_enable_cpumask_locks", 00:05:09.523 "framework_disable_cpumask_locks", 00:05:09.523 "framework_wait_init", 00:05:09.523 "framework_start_init", 00:05:09.523 "scsi_get_devices", 00:05:09.523 "bdev_get_histogram", 00:05:09.523 "bdev_enable_histogram", 00:05:09.523 "bdev_set_qos_limit", 00:05:09.523 "bdev_set_qd_sampling_period", 00:05:09.523 "bdev_get_bdevs", 00:05:09.523 "bdev_reset_iostat", 00:05:09.523 "bdev_get_iostat", 00:05:09.523 "bdev_examine", 00:05:09.523 "bdev_wait_for_examine", 00:05:09.523 "bdev_set_options", 00:05:09.523 "notify_get_notifications", 00:05:09.523 "notify_get_types", 00:05:09.523 "accel_get_stats", 00:05:09.523 "accel_set_options", 00:05:09.523 "accel_set_driver", 00:05:09.523 "accel_crypto_key_destroy", 00:05:09.523 "accel_crypto_keys_get", 00:05:09.523 "accel_crypto_key_create", 00:05:09.523 "accel_assign_opc", 00:05:09.523 "accel_get_module_info", 00:05:09.523 "accel_get_opc_assignments", 00:05:09.523 "vmd_rescan", 00:05:09.523 "vmd_remove_device", 00:05:09.523 "vmd_enable", 00:05:09.523 "sock_get_default_impl", 00:05:09.523 "sock_set_default_impl", 00:05:09.523 "sock_impl_set_options", 00:05:09.523 "sock_impl_get_options", 00:05:09.523 "iobuf_get_stats", 00:05:09.523 "iobuf_set_options", 00:05:09.523 "keyring_get_keys", 00:05:09.523 "framework_get_pci_devices", 00:05:09.523 "framework_get_config", 00:05:09.523 "framework_get_subsystems", 00:05:09.523 "vfu_tgt_set_base_path", 00:05:09.523 "trace_get_info", 00:05:09.523 "trace_get_tpoint_group_mask", 00:05:09.523 "trace_disable_tpoint_group", 00:05:09.523 "trace_enable_tpoint_group", 00:05:09.523 "trace_clear_tpoint_mask", 00:05:09.523 
"trace_set_tpoint_mask", 00:05:09.523 "spdk_get_version", 00:05:09.523 "rpc_get_methods" 00:05:09.523 ] 00:05:09.523 15:47:49 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:09.523 15:47:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:09.523 15:47:49 -- common/autotest_common.sh@10 -- # set +x 00:05:09.523 15:47:49 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:09.523 15:47:49 -- spdkcli/tcp.sh@38 -- # killprocess 2266307 00:05:09.523 15:47:49 -- common/autotest_common.sh@936 -- # '[' -z 2266307 ']' 00:05:09.523 15:47:49 -- common/autotest_common.sh@940 -- # kill -0 2266307 00:05:09.523 15:47:49 -- common/autotest_common.sh@941 -- # uname 00:05:09.523 15:47:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:09.523 15:47:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2266307 00:05:09.782 15:47:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:09.782 15:47:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:09.782 15:47:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2266307' 00:05:09.782 killing process with pid 2266307 00:05:09.782 15:47:49 -- common/autotest_common.sh@955 -- # kill 2266307 00:05:09.782 15:47:49 -- common/autotest_common.sh@960 -- # wait 2266307 00:05:12.312 00:05:12.312 real 0m4.133s 00:05:12.312 user 0m7.330s 00:05:12.312 sys 0m0.555s 00:05:12.312 15:47:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:12.312 15:47:51 -- common/autotest_common.sh@10 -- # set +x 00:05:12.312 ************************************ 00:05:12.312 END TEST spdkcli_tcp 00:05:12.312 ************************************ 00:05:12.312 15:47:51 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:12.312 15:47:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.312 15:47:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.312 15:47:51 -- common/autotest_common.sh@10 -- # set +x 00:05:12.312 ************************************ 00:05:12.312 START TEST dpdk_mem_utility 00:05:12.312 ************************************ 00:05:12.312 15:47:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:12.312 * Looking for test storage... 00:05:12.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:12.312 15:47:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:12.312 15:47:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2267076 00:05:12.312 15:47:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2267076 00:05:12.312 15:47:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.312 15:47:51 -- common/autotest_common.sh@817 -- # '[' -z 2267076 ']' 00:05:12.312 15:47:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.312 15:47:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:12.312 15:47:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:12.312 15:47:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:12.312 15:47:51 -- common/autotest_common.sh@10 -- # set +x 00:05:12.571 [2024-04-26 15:47:51.997542] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:12.571 [2024-04-26 15:47:51.997625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2267076 ] 00:05:12.571 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.571 [2024-04-26 15:47:52.100048] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.830 [2024-04-26 15:47:52.312949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.769 15:47:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:13.769 15:47:53 -- common/autotest_common.sh@850 -- # return 0 00:05:13.769 15:47:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:13.769 15:47:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:13.769 15:47:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:13.769 15:47:53 -- common/autotest_common.sh@10 -- # set +x 00:05:13.769 { 00:05:13.769 "filename": "/tmp/spdk_mem_dump.txt" 00:05:13.769 } 00:05:13.769 15:47:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:13.769 15:47:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:13.769 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:13.769 1 heaps totaling size 820.000000 MiB 00:05:13.769 size: 820.000000 MiB heap id: 0 00:05:13.769 end heaps---------- 00:05:13.769 8 mempools totaling size 598.116089 MiB 00:05:13.769 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:13.769 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:13.769 size: 84.521057 MiB name: bdev_io_2267076 00:05:13.769 size: 51.011292 MiB name: evtpool_2267076 00:05:13.769 size: 50.003479 MiB name: msgpool_2267076 00:05:13.769 size: 21.763794 MiB name: PDU_Pool 00:05:13.769 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:13.769 size: 0.026123 MiB name: Session_Pool 00:05:13.769 end mempools------- 00:05:13.769 6 memzones totaling size 4.142822 MiB 00:05:13.769 size: 1.000366 MiB name: RG_ring_0_2267076 00:05:13.769 size: 1.000366 MiB name: RG_ring_1_2267076 00:05:13.769 size: 1.000366 MiB name: RG_ring_4_2267076 00:05:13.769 size: 1.000366 MiB name: RG_ring_5_2267076 00:05:13.769 size: 0.125366 MiB name: RG_ring_2_2267076 00:05:13.769 size: 0.015991 MiB name: RG_ring_3_2267076 00:05:13.769 end memzones------- 00:05:13.769 15:47:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:13.769 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:05:13.769 list of free elements. 
size: 18.514832 MiB 00:05:13.769 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:13.769 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:13.769 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:13.769 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:13.769 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:13.769 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:13.769 element at address: 0x200019600000 with size: 0.999329 MiB 00:05:13.769 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:13.769 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:13.769 element at address: 0x200018e00000 with size: 0.959900 MiB 00:05:13.769 element at address: 0x200019900040 with size: 0.937256 MiB 00:05:13.769 element at address: 0x200000200000 with size: 0.840942 MiB 00:05:13.769 element at address: 0x20001b000000 with size: 0.583191 MiB 00:05:13.769 element at address: 0x200019200000 with size: 0.491150 MiB 00:05:13.769 element at address: 0x200019a00000 with size: 0.485657 MiB 00:05:13.769 element at address: 0x200013800000 with size: 0.470581 MiB 00:05:13.769 element at address: 0x200028400000 with size: 0.411072 MiB 00:05:13.769 element at address: 0x200003a00000 with size: 0.356140 MiB 00:05:13.769 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:05:13.769 list of standard malloc elements. size: 199.220764 MiB 00:05:13.769 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:13.769 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:13.769 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:13.769 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:13.769 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:13.769 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:13.769 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:13.769 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:13.769 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:05:13.769 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:05:13.769 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:05:13.769 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:05:13.769 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:05:13.769 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:13.769 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:13.769 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:13.769 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:13.769 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:13.769 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:13.769 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:13.769 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:05:13.769 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:05:13.769 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:05:13.769 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:05:13.769 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:05:13.769 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:05:13.769 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:13.769 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:13.769 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 
00:05:13.769 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:13.769 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:05:13.769 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:05:13.769 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:05:13.769 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:05:13.769 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:05:13.769 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:05:13.769 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:05:13.769 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:05:13.769 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:13.769 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:13.769 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:13.769 list of memzone associated elements. size: 602.264404 MiB 00:05:13.769 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:13.769 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:13.769 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:13.769 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:13.769 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:13.769 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2267076_0 00:05:13.769 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:13.769 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2267076_0 00:05:13.769 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:13.769 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2267076_0 00:05:13.769 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:13.769 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:13.769 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:13.769 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:13.770 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:13.770 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2267076 00:05:13.770 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:13.770 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2267076 00:05:13.770 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:13.770 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2267076 00:05:13.770 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:13.770 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:13.770 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:13.770 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:13.770 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:13.770 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:13.770 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:13.770 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:13.770 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:13.770 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2267076 00:05:13.770 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:13.770 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2267076 00:05:13.770 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:13.770 associated memzone info: size: 
1.000366 MiB name: RG_ring_4_2267076 00:05:13.770 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:13.770 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2267076 00:05:13.770 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:13.770 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2267076 00:05:13.770 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:05:13.770 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:13.770 element at address: 0x200013878780 with size: 0.500549 MiB 00:05:13.770 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:13.770 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:05:13.770 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:13.770 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:13.770 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2267076 00:05:13.770 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:05:13.770 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:13.770 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:05:13.770 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:13.770 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:13.770 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2267076 00:05:13.770 element at address: 0x20002846f540 with size: 0.002502 MiB 00:05:13.770 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:13.770 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:05:13.770 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2267076 00:05:13.770 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:13.770 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2267076 00:05:13.770 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:05:13.770 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:13.770 15:47:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:13.770 15:47:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2267076 00:05:13.770 15:47:53 -- common/autotest_common.sh@936 -- # '[' -z 2267076 ']' 00:05:13.770 15:47:53 -- common/autotest_common.sh@940 -- # kill -0 2267076 00:05:13.770 15:47:53 -- common/autotest_common.sh@941 -- # uname 00:05:13.770 15:47:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:13.770 15:47:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2267076 00:05:13.770 15:47:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:13.770 15:47:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:13.770 15:47:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2267076' 00:05:13.770 killing process with pid 2267076 00:05:13.770 15:47:53 -- common/autotest_common.sh@955 -- # kill 2267076 00:05:13.770 15:47:53 -- common/autotest_common.sh@960 -- # wait 2267076 00:05:16.300 00:05:16.300 real 0m3.917s 00:05:16.300 user 0m3.885s 00:05:16.300 sys 0m0.482s 00:05:16.300 15:47:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:16.300 15:47:55 -- common/autotest_common.sh@10 -- # set +x 00:05:16.300 ************************************ 00:05:16.300 END TEST dpdk_mem_utility 00:05:16.300 ************************************ 00:05:16.300 15:47:55 -- spdk/autotest.sh@177 -- # run_test event 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:16.300 15:47:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.300 15:47:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.300 15:47:55 -- common/autotest_common.sh@10 -- # set +x 00:05:16.300 ************************************ 00:05:16.300 START TEST event 00:05:16.300 ************************************ 00:05:16.300 15:47:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:16.300 * Looking for test storage... 00:05:16.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:16.558 15:47:55 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:16.558 15:47:55 -- bdev/nbd_common.sh@6 -- # set -e 00:05:16.558 15:47:55 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:16.558 15:47:55 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:16.558 15:47:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.558 15:47:55 -- common/autotest_common.sh@10 -- # set +x 00:05:16.558 ************************************ 00:05:16.558 START TEST event_perf 00:05:16.558 ************************************ 00:05:16.558 15:47:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:16.558 Running I/O for 1 seconds...[2024-04-26 15:47:56.166129] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:16.558 [2024-04-26 15:47:56.166203] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2267839 ] 00:05:16.558 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.816 [2024-04-26 15:47:56.270309] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:17.074 [2024-04-26 15:47:56.500750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.074 [2024-04-26 15:47:56.500824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.074 [2024-04-26 15:47:56.500884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.074 [2024-04-26 15:47:56.500890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:18.448 Running I/O for 1 seconds... 00:05:18.448 lcore 0: 201832 00:05:18.448 lcore 1: 201830 00:05:18.448 lcore 2: 201830 00:05:18.448 lcore 3: 201830 00:05:18.448 done. 
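The four per-lcore counters just printed line up with the -m 0xF mask event_perf was started with: 0xF is binary 1111, i.e. lcores 0 through 3. A throwaway snippet to expand such a mask, handy when the mask is less obvious than 0xF:

  mask=0xF                                  # core mask passed to event_perf above
  for ((core = 0; core < 64; core++)); do
    (( (mask >> core) & 1 )) && echo "lcore $core enabled"
  done
  # 0xF -> lcores 0 1 2 3, matching the four counters printed above.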
00:05:18.448 00:05:18.448 real 0m1.786s 00:05:18.448 user 0m4.644s 00:05:18.448 sys 0m0.136s 00:05:18.448 15:47:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:18.448 15:47:57 -- common/autotest_common.sh@10 -- # set +x 00:05:18.448 ************************************ 00:05:18.448 END TEST event_perf 00:05:18.448 ************************************ 00:05:18.448 15:47:57 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:18.448 15:47:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:18.448 15:47:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.448 15:47:57 -- common/autotest_common.sh@10 -- # set +x 00:05:18.448 ************************************ 00:05:18.448 START TEST event_reactor 00:05:18.448 ************************************ 00:05:18.448 15:47:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:18.707 [2024-04-26 15:47:58.132597] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:18.707 [2024-04-26 15:47:58.132678] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268105 ] 00:05:18.707 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.707 [2024-04-26 15:47:58.239476] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.965 [2024-04-26 15:47:58.459265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.339 test_start 00:05:20.339 oneshot 00:05:20.339 tick 100 00:05:20.339 tick 100 00:05:20.339 tick 250 00:05:20.339 tick 100 00:05:20.339 tick 100 00:05:20.339 tick 100 00:05:20.339 tick 250 00:05:20.339 tick 500 00:05:20.339 tick 100 00:05:20.339 tick 100 00:05:20.339 tick 250 00:05:20.339 tick 100 00:05:20.339 tick 100 00:05:20.339 test_end 00:05:20.339 00:05:20.339 real 0m1.773s 00:05:20.339 user 0m1.621s 00:05:20.339 sys 0m0.144s 00:05:20.339 15:47:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:20.339 15:47:59 -- common/autotest_common.sh@10 -- # set +x 00:05:20.339 ************************************ 00:05:20.339 END TEST event_reactor 00:05:20.339 ************************************ 00:05:20.339 15:47:59 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:20.339 15:47:59 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:20.339 15:47:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.339 15:47:59 -- common/autotest_common.sh@10 -- # set +x 00:05:20.339 ************************************ 00:05:20.339 START TEST event_reactor_perf 00:05:20.339 ************************************ 00:05:20.598 15:48:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:20.598 [2024-04-26 15:48:00.058872] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:05:20.598 [2024-04-26 15:48:00.058954] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268591 ] 00:05:20.598 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.598 [2024-04-26 15:48:00.163190] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.856 [2024-04-26 15:48:00.385878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.230 test_start 00:05:22.230 test_end 00:05:22.230 Performance: 369775 events per second 00:05:22.230 00:05:22.230 real 0m1.772s 00:05:22.230 user 0m1.631s 00:05:22.230 sys 0m0.132s 00:05:22.230 15:48:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:22.230 15:48:01 -- common/autotest_common.sh@10 -- # set +x 00:05:22.230 ************************************ 00:05:22.230 END TEST event_reactor_perf 00:05:22.230 ************************************ 00:05:22.230 15:48:01 -- event/event.sh@49 -- # uname -s 00:05:22.230 15:48:01 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:22.230 15:48:01 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:22.230 15:48:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.230 15:48:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.230 15:48:01 -- common/autotest_common.sh@10 -- # set +x 00:05:22.487 ************************************ 00:05:22.487 START TEST event_scheduler 00:05:22.487 ************************************ 00:05:22.487 15:48:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:22.487 * Looking for test storage... 00:05:22.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:22.487 15:48:02 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:22.487 15:48:02 -- scheduler/scheduler.sh@35 -- # scheduler_pid=2268993 00:05:22.487 15:48:02 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.487 15:48:02 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:22.487 15:48:02 -- scheduler/scheduler.sh@37 -- # waitforlisten 2268993 00:05:22.488 15:48:02 -- common/autotest_common.sh@817 -- # '[' -z 2268993 ']' 00:05:22.488 15:48:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.488 15:48:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:22.488 15:48:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.488 15:48:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:22.488 15:48:02 -- common/autotest_common.sh@10 -- # set +x 00:05:22.488 [2024-04-26 15:48:02.123695] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:05:22.488 [2024-04-26 15:48:02.123804] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2268993 ] 00:05:22.746 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.746 [2024-04-26 15:48:02.224376] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:23.003 [2024-04-26 15:48:02.460414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.003 [2024-04-26 15:48:02.460489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.003 [2024-04-26 15:48:02.460589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.003 [2024-04-26 15:48:02.460598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:23.260 15:48:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:23.260 15:48:02 -- common/autotest_common.sh@850 -- # return 0 00:05:23.260 15:48:02 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:23.260 15:48:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:23.260 15:48:02 -- common/autotest_common.sh@10 -- # set +x 00:05:23.260 POWER: Env isn't set yet! 00:05:23.260 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:23.260 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:23.261 POWER: Cannot set governor of lcore 0 to userspace 00:05:23.261 POWER: Attempting to initialise PSTAT power management... 00:05:23.261 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:23.261 POWER: Initialized successfully for lcore 0 power management 00:05:23.519 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:23.519 POWER: Initialized successfully for lcore 1 power management 00:05:23.519 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:23.519 POWER: Initialized successfully for lcore 2 power management 00:05:23.519 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:23.519 POWER: Initialized successfully for lcore 3 power management 00:05:23.519 15:48:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:23.519 15:48:02 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:23.519 15:48:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:23.519 15:48:02 -- common/autotest_common.sh@10 -- # set +x 00:05:23.780 [2024-04-26 15:48:03.329701] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:23.780 15:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:23.780 15:48:03 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:23.780 15:48:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.780 15:48:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.780 15:48:03 -- common/autotest_common.sh@10 -- # set +x 00:05:23.780 ************************************ 00:05:23.780 START TEST scheduler_create_thread 00:05:23.780 ************************************ 00:05:23.780 15:48:03 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:05:23.780 15:48:03 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:24.038 15:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:24.038 15:48:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.038 2 00:05:24.038 15:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:24.038 15:48:03 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:24.038 15:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:24.038 15:48:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.038 3 00:05:24.038 15:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:24.038 15:48:03 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:24.038 15:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:24.038 15:48:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.038 4 00:05:24.038 15:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:24.038 15:48:03 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:24.038 15:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:24.038 15:48:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.038 5 00:05:24.038 15:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:24.038 15:48:03 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:24.038 15:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:24.038 15:48:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.038 6 00:05:24.038 15:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:24.038 15:48:03 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:24.038 15:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:24.038 15:48:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.038 7 00:05:24.038 15:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:24.038 15:48:03 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:24.038 15:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:24.038 15:48:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.038 8 00:05:24.038 15:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:24.038 15:48:03 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:24.038 15:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:24.038 15:48:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.038 9 00:05:24.038 
15:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:24.038 15:48:03 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:24.038 15:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:24.038 15:48:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.038 10 00:05:24.038 15:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:24.038 15:48:03 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:24.038 15:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:24.038 15:48:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.038 15:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:24.038 15:48:03 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:24.038 15:48:03 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:24.038 15:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:24.038 15:48:03 -- common/autotest_common.sh@10 -- # set +x 00:05:24.038 15:48:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:24.038 15:48:03 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:24.038 15:48:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:24.038 15:48:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.407 15:48:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:25.407 15:48:05 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:25.407 15:48:05 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:25.407 15:48:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:25.407 15:48:05 -- common/autotest_common.sh@10 -- # set +x 00:05:26.850 15:48:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:26.850 00:05:26.850 real 0m2.625s 00:05:26.850 user 0m0.024s 00:05:26.850 sys 0m0.004s 00:05:26.850 15:48:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:26.850 15:48:06 -- common/autotest_common.sh@10 -- # set +x 00:05:26.850 ************************************ 00:05:26.850 END TEST scheduler_create_thread 00:05:26.850 ************************************ 00:05:26.850 15:48:06 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:26.850 15:48:06 -- scheduler/scheduler.sh@46 -- # killprocess 2268993 00:05:26.850 15:48:06 -- common/autotest_common.sh@936 -- # '[' -z 2268993 ']' 00:05:26.850 15:48:06 -- common/autotest_common.sh@940 -- # kill -0 2268993 00:05:26.850 15:48:06 -- common/autotest_common.sh@941 -- # uname 00:05:26.850 15:48:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:26.850 15:48:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2268993 00:05:26.850 15:48:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:26.850 15:48:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:26.850 15:48:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2268993' 00:05:26.851 killing process with pid 2268993 00:05:26.851 15:48:06 -- common/autotest_common.sh@955 -- # kill 2268993 00:05:26.851 15:48:06 -- common/autotest_common.sh@960 -- # wait 2268993 00:05:27.109 [2024-04-26 15:48:06.567092] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
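Beyond the test-only scheduler_plugin threads created above, the generally useful RPC this section exercises is framework_set_scheduler: the run switches the target to the dynamic scheduler before framework_start_init, which is why the app was launched with --wait-for-rpc. Against a regular spdk_tgt the same switch, plus a read-back, would look like this; both method names appear in the rpc_get_methods listing earlier in this log.

  # Issued before framework_start_init, as in the scheduler test above.
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_get_scheduler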
00:05:28.041 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:28.041 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:28.041 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:28.041 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:28.041 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:28.041 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:28.041 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:28.041 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:28.350 00:05:28.350 real 0m5.897s 00:05:28.350 user 0m9.166s 00:05:28.350 sys 0m0.515s 00:05:28.350 15:48:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:28.350 15:48:07 -- common/autotest_common.sh@10 -- # set +x 00:05:28.350 ************************************ 00:05:28.350 END TEST event_scheduler 00:05:28.350 ************************************ 00:05:28.350 15:48:07 -- event/event.sh@51 -- # modprobe -n nbd 00:05:28.350 15:48:07 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:28.350 15:48:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.350 15:48:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.350 15:48:07 -- common/autotest_common.sh@10 -- # set +x 00:05:28.350 ************************************ 00:05:28.350 START TEST app_repeat 00:05:28.350 ************************************ 00:05:28.350 15:48:08 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:05:28.350 15:48:08 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.350 15:48:08 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.350 15:48:08 -- event/event.sh@13 -- # local nbd_list 00:05:28.350 15:48:08 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.350 15:48:08 -- event/event.sh@14 -- # local bdev_list 00:05:28.350 15:48:08 -- event/event.sh@15 -- # local repeat_times=4 00:05:28.350 15:48:08 -- event/event.sh@17 -- # modprobe nbd 00:05:28.607 15:48:08 -- event/event.sh@19 -- # repeat_pid=2270105 00:05:28.607 15:48:08 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.607 15:48:08 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:28.607 15:48:08 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2270105' 00:05:28.607 Process app_repeat pid: 2270105 00:05:28.607 15:48:08 -- event/event.sh@23 -- # for i in {0..2} 00:05:28.607 15:48:08 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:28.607 spdk_app_start Round 0 00:05:28.607 15:48:08 -- event/event.sh@25 -- # waitforlisten 2270105 /var/tmp/spdk-nbd.sock 00:05:28.607 15:48:08 -- common/autotest_common.sh@817 -- # '[' -z 2270105 ']' 00:05:28.607 15:48:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.607 15:48:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:28.607 15:48:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:28.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.607 15:48:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:28.607 15:48:08 -- common/autotest_common.sh@10 -- # set +x 00:05:28.607 [2024-04-26 15:48:08.079259] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:28.607 [2024-04-26 15:48:08.079355] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2270105 ] 00:05:28.607 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.607 [2024-04-26 15:48:08.183678] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.864 [2024-04-26 15:48:08.417958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.864 [2024-04-26 15:48:08.417965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.428 15:48:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:29.428 15:48:08 -- common/autotest_common.sh@850 -- # return 0 00:05:29.428 15:48:08 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.685 Malloc0 00:05:29.685 15:48:09 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.942 Malloc1 00:05:29.942 15:48:09 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.942 15:48:09 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.942 15:48:09 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.942 15:48:09 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:29.942 15:48:09 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.942 15:48:09 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:29.942 15:48:09 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.942 15:48:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.942 15:48:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.942 15:48:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:29.942 15:48:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.942 15:48:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:29.942 15:48:09 -- bdev/nbd_common.sh@12 -- # local i 00:05:29.942 15:48:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:29.942 15:48:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.942 15:48:09 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:29.942 /dev/nbd0 00:05:29.942 15:48:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:29.942 15:48:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:29.942 15:48:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:29.942 15:48:09 -- common/autotest_common.sh@855 -- # local i 00:05:29.942 15:48:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:29.942 15:48:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:29.942 15:48:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:29.942 15:48:09 -- 
common/autotest_common.sh@859 -- # break 00:05:29.942 15:48:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:29.942 15:48:09 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:29.942 15:48:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.942 1+0 records in 00:05:29.942 1+0 records out 00:05:29.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189712 s, 21.6 MB/s 00:05:29.942 15:48:09 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.942 15:48:09 -- common/autotest_common.sh@872 -- # size=4096 00:05:29.942 15:48:09 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.942 15:48:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:29.942 15:48:09 -- common/autotest_common.sh@875 -- # return 0 00:05:29.942 15:48:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.942 15:48:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.942 15:48:09 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.199 /dev/nbd1 00:05:30.199 15:48:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.199 15:48:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.199 15:48:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:30.199 15:48:09 -- common/autotest_common.sh@855 -- # local i 00:05:30.199 15:48:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:30.199 15:48:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:30.199 15:48:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:30.199 15:48:09 -- common/autotest_common.sh@859 -- # break 00:05:30.199 15:48:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:30.199 15:48:09 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:30.199 15:48:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.199 1+0 records in 00:05:30.199 1+0 records out 00:05:30.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398655 s, 10.3 MB/s 00:05:30.199 15:48:09 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.199 15:48:09 -- common/autotest_common.sh@872 -- # size=4096 00:05:30.199 15:48:09 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.199 15:48:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:30.199 15:48:09 -- common/autotest_common.sh@875 -- # return 0 00:05:30.199 15:48:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.199 15:48:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.199 15:48:09 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.199 15:48:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.199 15:48:09 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:30.458 { 00:05:30.458 "nbd_device": "/dev/nbd0", 00:05:30.458 "bdev_name": "Malloc0" 00:05:30.458 }, 00:05:30.458 { 00:05:30.458 "nbd_device": "/dev/nbd1", 
00:05:30.458 "bdev_name": "Malloc1" 00:05:30.458 } 00:05:30.458 ]' 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.458 { 00:05:30.458 "nbd_device": "/dev/nbd0", 00:05:30.458 "bdev_name": "Malloc0" 00:05:30.458 }, 00:05:30.458 { 00:05:30.458 "nbd_device": "/dev/nbd1", 00:05:30.458 "bdev_name": "Malloc1" 00:05:30.458 } 00:05:30.458 ]' 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.458 /dev/nbd1' 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.458 /dev/nbd1' 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.458 256+0 records in 00:05:30.458 256+0 records out 00:05:30.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103469 s, 101 MB/s 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.458 256+0 records in 00:05:30.458 256+0 records out 00:05:30.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165495 s, 63.4 MB/s 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:30.458 256+0 records in 00:05:30.458 256+0 records out 00:05:30.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195355 s, 53.7 MB/s 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@51 -- # local i 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.458 15:48:10 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:30.714 15:48:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:30.715 15:48:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:30.715 15:48:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:30.715 15:48:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.715 15:48:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.715 15:48:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:30.715 15:48:10 -- bdev/nbd_common.sh@41 -- # break 00:05:30.715 15:48:10 -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.715 15:48:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.715 15:48:10 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:30.971 15:48:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:30.971 15:48:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:30.971 15:48:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:30.971 15:48:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.971 15:48:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.971 15:48:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:30.971 15:48:10 -- bdev/nbd_common.sh@41 -- # break 00:05:30.971 15:48:10 -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.971 15:48:10 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.971 15:48:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.971 15:48:10 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.228 15:48:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.228 15:48:10 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.228 15:48:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.228 15:48:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.228 15:48:10 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.228 15:48:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.228 15:48:10 -- bdev/nbd_common.sh@65 -- # true 00:05:31.228 15:48:10 -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.228 15:48:10 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.228 15:48:10 -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.228 15:48:10 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.228 15:48:10 -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.228 15:48:10 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:31.486 15:48:11 -- event/event.sh@35 -- # 
sleep 3 00:05:32.859 [2024-04-26 15:48:12.471718] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.117 [2024-04-26 15:48:12.685328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.117 [2024-04-26 15:48:12.685331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.375 [2024-04-26 15:48:12.926479] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:33.375 [2024-04-26 15:48:12.926528] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:34.745 15:48:14 -- event/event.sh@23 -- # for i in {0..2} 00:05:34.745 15:48:14 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:34.745 spdk_app_start Round 1 00:05:34.745 15:48:14 -- event/event.sh@25 -- # waitforlisten 2270105 /var/tmp/spdk-nbd.sock 00:05:34.745 15:48:14 -- common/autotest_common.sh@817 -- # '[' -z 2270105 ']' 00:05:34.745 15:48:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.745 15:48:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:34.745 15:48:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:34.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:34.745 15:48:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:34.745 15:48:14 -- common/autotest_common.sh@10 -- # set +x 00:05:34.745 15:48:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:34.745 15:48:14 -- common/autotest_common.sh@850 -- # return 0 00:05:34.745 15:48:14 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.002 Malloc0 00:05:35.003 15:48:14 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.260 Malloc1 00:05:35.260 15:48:14 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.260 15:48:14 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.260 15:48:14 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.260 15:48:14 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.260 15:48:14 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.260 15:48:14 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.260 15:48:14 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.260 15:48:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.260 15:48:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.260 15:48:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.260 15:48:14 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.260 15:48:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.260 15:48:14 -- bdev/nbd_common.sh@12 -- # local i 00:05:35.260 15:48:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.260 15:48:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.260 15:48:14 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.260 /dev/nbd0 00:05:35.260 15:48:14 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.260 15:48:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.260 15:48:14 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:35.260 15:48:14 -- common/autotest_common.sh@855 -- # local i 00:05:35.260 15:48:14 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:35.260 15:48:14 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:35.260 15:48:14 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:35.260 15:48:14 -- common/autotest_common.sh@859 -- # break 00:05:35.260 15:48:14 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:35.260 15:48:14 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:35.260 15:48:14 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.517 1+0 records in 00:05:35.517 1+0 records out 00:05:35.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204165 s, 20.1 MB/s 00:05:35.517 15:48:14 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.517 15:48:14 -- common/autotest_common.sh@872 -- # size=4096 00:05:35.517 15:48:14 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.517 15:48:14 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:35.517 15:48:14 -- common/autotest_common.sh@875 -- # return 0 00:05:35.517 15:48:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.517 15:48:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.517 15:48:14 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:35.517 /dev/nbd1 00:05:35.517 15:48:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:35.517 15:48:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:35.517 15:48:15 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:35.517 15:48:15 -- common/autotest_common.sh@855 -- # local i 00:05:35.517 15:48:15 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:35.517 15:48:15 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:35.517 15:48:15 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:35.517 15:48:15 -- common/autotest_common.sh@859 -- # break 00:05:35.517 15:48:15 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:35.517 15:48:15 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:35.517 15:48:15 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.517 1+0 records in 00:05:35.517 1+0 records out 00:05:35.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175626 s, 23.3 MB/s 00:05:35.517 15:48:15 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.517 15:48:15 -- common/autotest_common.sh@872 -- # size=4096 00:05:35.517 15:48:15 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.517 15:48:15 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:35.517 15:48:15 -- common/autotest_common.sh@875 -- # return 0 00:05:35.517 15:48:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.517 15:48:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.517 15:48:15 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.517 15:48:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.517 15:48:15 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:35.775 { 00:05:35.775 "nbd_device": "/dev/nbd0", 00:05:35.775 "bdev_name": "Malloc0" 00:05:35.775 }, 00:05:35.775 { 00:05:35.775 "nbd_device": "/dev/nbd1", 00:05:35.775 "bdev_name": "Malloc1" 00:05:35.775 } 00:05:35.775 ]' 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:35.775 { 00:05:35.775 "nbd_device": "/dev/nbd0", 00:05:35.775 "bdev_name": "Malloc0" 00:05:35.775 }, 00:05:35.775 { 00:05:35.775 "nbd_device": "/dev/nbd1", 00:05:35.775 "bdev_name": "Malloc1" 00:05:35.775 } 00:05:35.775 ]' 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:35.775 /dev/nbd1' 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:35.775 /dev/nbd1' 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@65 -- # count=2 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@95 -- # count=2 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:35.775 256+0 records in 00:05:35.775 256+0 records out 00:05:35.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103098 s, 102 MB/s 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:35.775 256+0 records in 00:05:35.775 256+0 records out 00:05:35.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167554 s, 62.6 MB/s 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:35.775 256+0 records in 00:05:35.775 256+0 records out 00:05:35.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0189764 s, 55.3 MB/s 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.775 15:48:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:35.776 15:48:15 -- bdev/nbd_common.sh@51 -- # local i 00:05:35.776 15:48:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.776 15:48:15 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.033 15:48:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.033 15:48:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.033 15:48:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.033 15:48:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.033 15:48:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.033 15:48:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.033 15:48:15 -- bdev/nbd_common.sh@41 -- # break 00:05:36.033 15:48:15 -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.033 15:48:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.033 15:48:15 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.289 15:48:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.289 15:48:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.289 15:48:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.289 15:48:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.290 15:48:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.290 15:48:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.290 15:48:15 -- bdev/nbd_common.sh@41 -- # break 00:05:36.290 15:48:15 -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.290 15:48:15 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.290 15:48:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.290 15:48:15 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.546 15:48:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:36.546 15:48:15 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:36.546 15:48:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.546 15:48:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:36.546 15:48:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:36.546 15:48:16 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:05:36.547 15:48:16 -- bdev/nbd_common.sh@65 -- # true 00:05:36.547 15:48:16 -- bdev/nbd_common.sh@65 -- # count=0 00:05:36.547 15:48:16 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:36.547 15:48:16 -- bdev/nbd_common.sh@104 -- # count=0 00:05:36.547 15:48:16 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:36.547 15:48:16 -- bdev/nbd_common.sh@109 -- # return 0 00:05:36.547 15:48:16 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:36.804 15:48:16 -- event/event.sh@35 -- # sleep 3 00:05:38.174 [2024-04-26 15:48:17.798471] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.431 [2024-04-26 15:48:18.009079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.431 [2024-04-26 15:48:18.009083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.689 [2024-04-26 15:48:18.247013] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:38.689 [2024-04-26 15:48:18.247078] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.060 15:48:19 -- event/event.sh@23 -- # for i in {0..2} 00:05:40.060 15:48:19 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:40.060 spdk_app_start Round 2 00:05:40.060 15:48:19 -- event/event.sh@25 -- # waitforlisten 2270105 /var/tmp/spdk-nbd.sock 00:05:40.060 15:48:19 -- common/autotest_common.sh@817 -- # '[' -z 2270105 ']' 00:05:40.060 15:48:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.060 15:48:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:40.060 15:48:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:40.060 15:48:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:40.060 15:48:19 -- common/autotest_common.sh@10 -- # set +x 00:05:40.060 15:48:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:40.060 15:48:19 -- common/autotest_common.sh@850 -- # return 0 00:05:40.060 15:48:19 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.317 Malloc0 00:05:40.317 15:48:19 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.573 Malloc1 00:05:40.573 15:48:20 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.573 15:48:20 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.573 15:48:20 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.573 15:48:20 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.574 15:48:20 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.574 15:48:20 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.574 15:48:20 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.574 15:48:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.574 15:48:20 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.574 15:48:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.574 15:48:20 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.574 15:48:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.574 15:48:20 -- bdev/nbd_common.sh@12 -- # local i 00:05:40.574 15:48:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.574 15:48:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.574 15:48:20 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.574 /dev/nbd0 00:05:40.574 15:48:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.574 15:48:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.574 15:48:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:40.574 15:48:20 -- common/autotest_common.sh@855 -- # local i 00:05:40.574 15:48:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:40.574 15:48:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:40.574 15:48:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:40.574 15:48:20 -- common/autotest_common.sh@859 -- # break 00:05:40.574 15:48:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:40.574 15:48:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:40.574 15:48:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.574 1+0 records in 00:05:40.574 1+0 records out 00:05:40.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186525 s, 22.0 MB/s 00:05:40.574 15:48:20 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.831 15:48:20 -- common/autotest_common.sh@872 -- # size=4096 00:05:40.831 15:48:20 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.831 15:48:20 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:05:40.831 15:48:20 -- common/autotest_common.sh@875 -- # return 0 00:05:40.831 15:48:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.831 15:48:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.831 15:48:20 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.831 /dev/nbd1 00:05:40.831 15:48:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.831 15:48:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.831 15:48:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:40.831 15:48:20 -- common/autotest_common.sh@855 -- # local i 00:05:40.831 15:48:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:40.831 15:48:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:40.831 15:48:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:40.831 15:48:20 -- common/autotest_common.sh@859 -- # break 00:05:40.831 15:48:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:40.831 15:48:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:40.831 15:48:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.831 1+0 records in 00:05:40.831 1+0 records out 00:05:40.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190581 s, 21.5 MB/s 00:05:40.831 15:48:20 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.831 15:48:20 -- common/autotest_common.sh@872 -- # size=4096 00:05:40.831 15:48:20 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.831 15:48:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:40.831 15:48:20 -- common/autotest_common.sh@875 -- # return 0 00:05:40.831 15:48:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.831 15:48:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.831 15:48:20 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.831 15:48:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.831 15:48:20 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.088 15:48:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.088 { 00:05:41.088 "nbd_device": "/dev/nbd0", 00:05:41.088 "bdev_name": "Malloc0" 00:05:41.088 }, 00:05:41.088 { 00:05:41.088 "nbd_device": "/dev/nbd1", 00:05:41.088 "bdev_name": "Malloc1" 00:05:41.088 } 00:05:41.088 ]' 00:05:41.088 15:48:20 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.088 { 00:05:41.088 "nbd_device": "/dev/nbd0", 00:05:41.088 "bdev_name": "Malloc0" 00:05:41.088 }, 00:05:41.088 { 00:05:41.088 "nbd_device": "/dev/nbd1", 00:05:41.088 "bdev_name": "Malloc1" 00:05:41.088 } 00:05:41.088 ]' 00:05:41.088 15:48:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.089 /dev/nbd1' 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.089 /dev/nbd1' 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.089 15:48:20 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.089 256+0 records in 00:05:41.089 256+0 records out 00:05:41.089 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00866124 s, 121 MB/s 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.089 256+0 records in 00:05:41.089 256+0 records out 00:05:41.089 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161301 s, 65.0 MB/s 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.089 256+0 records in 00:05:41.089 256+0 records out 00:05:41.089 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196068 s, 53.5 MB/s 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@51 -- # local i 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.089 15:48:20 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.347 15:48:20 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.347 15:48:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.347 15:48:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.347 15:48:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.347 15:48:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.347 15:48:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.347 15:48:20 -- bdev/nbd_common.sh@41 -- # break 00:05:41.347 15:48:20 -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.347 15:48:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.347 15:48:20 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.604 15:48:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.604 15:48:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.604 15:48:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.604 15:48:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.604 15:48:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.604 15:48:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.604 15:48:21 -- bdev/nbd_common.sh@41 -- # break 00:05:41.604 15:48:21 -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.604 15:48:21 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.604 15:48:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.604 15:48:21 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.862 15:48:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.862 15:48:21 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.862 15:48:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.862 15:48:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.862 15:48:21 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.862 15:48:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.862 15:48:21 -- bdev/nbd_common.sh@65 -- # true 00:05:41.862 15:48:21 -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.862 15:48:21 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.862 15:48:21 -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.862 15:48:21 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.862 15:48:21 -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.862 15:48:21 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.119 15:48:21 -- event/event.sh@35 -- # sleep 3 00:05:43.489 [2024-04-26 15:48:23.098586] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.748 [2024-04-26 15:48:23.312989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.748 [2024-04-26 15:48:23.312991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.006 [2024-04-26 15:48:23.554066] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:44.006 [2024-04-26 15:48:23.554133] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
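Each app_repeat round recorded above performs the same NBD round-trip. Condensed for readability, the flow the xtrace shows is roughly the sketch below; it assumes the app_repeat app is already listening on /var/tmp/spdk-nbd.sock and that the nbd kernel module is loaded (the modprobe nbd near the start of the test), uses $SPDK as shorthand for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout, and shortens the temp-file paths, so it is an outline of the trace rather than a verbatim excerpt of event.sh.

    # One app_repeat round, condensed from the xtrace above.
    rpc="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096                  # first malloc bdev  -> Malloc0
    $rpc bdev_malloc_create 64 4096                  # second malloc bdev -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256           # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct  # write it through the NBD device
        cmp -b -n 1M nbdrandtest $nbd                             # and verify it reads back identically
    done
    rm nbdrandtest
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    $rpc nbd_get_disks                               # expected to report an empty list afterwards

After each round the app is torn down with spdk_kill_instance SIGTERM over the same socket and the test sleeps 3 seconds before the same process (pid 2270105) brings the app back up for the next round, which is what the repeated "spdk_app_start Round N" and reactor notices correspond to.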
00:05:45.378 15:48:24 -- event/event.sh@38 -- # waitforlisten 2270105 /var/tmp/spdk-nbd.sock 00:05:45.378 15:48:24 -- common/autotest_common.sh@817 -- # '[' -z 2270105 ']' 00:05:45.378 15:48:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.378 15:48:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:45.378 15:48:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:45.378 15:48:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:45.378 15:48:24 -- common/autotest_common.sh@10 -- # set +x 00:05:45.378 15:48:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:45.378 15:48:24 -- common/autotest_common.sh@850 -- # return 0 00:05:45.378 15:48:24 -- event/event.sh@39 -- # killprocess 2270105 00:05:45.378 15:48:24 -- common/autotest_common.sh@936 -- # '[' -z 2270105 ']' 00:05:45.378 15:48:24 -- common/autotest_common.sh@940 -- # kill -0 2270105 00:05:45.378 15:48:24 -- common/autotest_common.sh@941 -- # uname 00:05:45.378 15:48:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:45.378 15:48:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2270105 00:05:45.378 15:48:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:45.378 15:48:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:45.378 15:48:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2270105' 00:05:45.378 killing process with pid 2270105 00:05:45.378 15:48:24 -- common/autotest_common.sh@955 -- # kill 2270105 00:05:45.378 15:48:24 -- common/autotest_common.sh@960 -- # wait 2270105 00:05:46.751 spdk_app_start is called in Round 0. 00:05:46.751 Shutdown signal received, stop current app iteration 00:05:46.751 Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 reinitialization... 00:05:46.751 spdk_app_start is called in Round 1. 00:05:46.751 Shutdown signal received, stop current app iteration 00:05:46.751 Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 reinitialization... 00:05:46.751 spdk_app_start is called in Round 2. 00:05:46.751 Shutdown signal received, stop current app iteration 00:05:46.751 Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 reinitialization... 00:05:46.751 spdk_app_start is called in Round 3. 
00:05:46.751 Shutdown signal received, stop current app iteration 00:05:46.751 15:48:26 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:46.751 15:48:26 -- event/event.sh@42 -- # return 0 00:05:46.751 00:05:46.751 real 0m18.124s 00:05:46.751 user 0m36.803s 00:05:46.751 sys 0m2.346s 00:05:46.751 15:48:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:46.751 15:48:26 -- common/autotest_common.sh@10 -- # set +x 00:05:46.751 ************************************ 00:05:46.751 END TEST app_repeat 00:05:46.751 ************************************ 00:05:46.751 15:48:26 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:46.751 15:48:26 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:46.751 15:48:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.751 15:48:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.751 15:48:26 -- common/autotest_common.sh@10 -- # set +x 00:05:46.751 ************************************ 00:05:46.751 START TEST cpu_locks 00:05:46.751 ************************************ 00:05:46.751 15:48:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:46.751 * Looking for test storage... 00:05:46.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:46.751 15:48:26 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:46.751 15:48:26 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:46.751 15:48:26 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:46.751 15:48:26 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:46.751 15:48:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.751 15:48:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.751 15:48:26 -- common/autotest_common.sh@10 -- # set +x 00:05:47.009 ************************************ 00:05:47.009 START TEST default_locks 00:05:47.009 ************************************ 00:05:47.009 15:48:26 -- common/autotest_common.sh@1111 -- # default_locks 00:05:47.009 15:48:26 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2273838 00:05:47.009 15:48:26 -- event/cpu_locks.sh@47 -- # waitforlisten 2273838 00:05:47.009 15:48:26 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.009 15:48:26 -- common/autotest_common.sh@817 -- # '[' -z 2273838 ']' 00:05:47.009 15:48:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.009 15:48:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:47.009 15:48:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.009 15:48:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:47.009 15:48:26 -- common/autotest_common.sh@10 -- # set +x 00:05:47.009 [2024-04-26 15:48:26.608540] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
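The cpu_locks suite that starts above drives two RPC sockets, /var/tmp/spdk.sock and /var/tmp/spdk2.sock, and checks SPDK's CPU core lock files. The central check, visible in the trace as cpu_locks.sh@22, boils down to the sketch below (a paraphrase for readability, with $spdk_tgt_pid standing for whichever pid the sub-test captured at launch):

    # locks_exist: a target started with -m 0x1 is expected to hold file locks whose
    # names contain "spdk_cpu_lock"; grep -q keeps the check silent.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist "$spdk_tgt_pid"

Each sub-test below launches spdk_tgt with core mask 0x1, runs this check or the companion no_locks helper that expects the collected lock-file list to be empty, and then tears the target down with killprocess. The stray "lslocks: write error" lines that show up later in the suite are most likely just lslocks reporting a broken pipe once grep -q has matched and exited; as the END TEST banners show, they do not fail the checks.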
00:05:47.009 [2024-04-26 15:48:26.608620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273838 ] 00:05:47.009 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.267 [2024-04-26 15:48:26.713920] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.267 [2024-04-26 15:48:26.925861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.205 15:48:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:48.205 15:48:27 -- common/autotest_common.sh@850 -- # return 0 00:05:48.205 15:48:27 -- event/cpu_locks.sh@49 -- # locks_exist 2273838 00:05:48.205 15:48:27 -- event/cpu_locks.sh@22 -- # lslocks -p 2273838 00:05:48.205 15:48:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.770 lslocks: write error 00:05:48.770 15:48:28 -- event/cpu_locks.sh@50 -- # killprocess 2273838 00:05:48.770 15:48:28 -- common/autotest_common.sh@936 -- # '[' -z 2273838 ']' 00:05:48.770 15:48:28 -- common/autotest_common.sh@940 -- # kill -0 2273838 00:05:48.770 15:48:28 -- common/autotest_common.sh@941 -- # uname 00:05:48.770 15:48:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.770 15:48:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2273838 00:05:49.028 15:48:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:49.028 15:48:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:49.028 15:48:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2273838' 00:05:49.028 killing process with pid 2273838 00:05:49.028 15:48:28 -- common/autotest_common.sh@955 -- # kill 2273838 00:05:49.028 15:48:28 -- common/autotest_common.sh@960 -- # wait 2273838 00:05:51.557 15:48:30 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2273838 00:05:51.557 15:48:30 -- common/autotest_common.sh@638 -- # local es=0 00:05:51.557 15:48:30 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2273838 00:05:51.557 15:48:30 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:51.557 15:48:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:51.557 15:48:30 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:51.557 15:48:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:51.557 15:48:30 -- common/autotest_common.sh@641 -- # waitforlisten 2273838 00:05:51.557 15:48:30 -- common/autotest_common.sh@817 -- # '[' -z 2273838 ']' 00:05:51.557 15:48:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.557 15:48:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:51.557 15:48:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
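The entries that follow show waitforlisten taking its failure path on purpose. The helper takes a pid and an RPC socket path, prints the banner above, and then, with xtrace disabled for the polling loop (hence the gap in the trace, presumably bounded by the max_retries=100 seen just before), keeps checking that the pid is still alive; since pid 2273838 has just been killed, that probe fails with "No such process", waitforlisten reports that the process is no longer running and returns 1, and the NOT wrapper turns the failure into the expected pass (the es=1 bookkeeping below). In outline, using the suite's own helpers:

    NOT waitforlisten "$spdk_tgt_pid"   # passes only because waitforlisten fails on the dead pid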
00:05:51.557 15:48:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:51.557 15:48:30 -- common/autotest_common.sh@10 -- # set +x 00:05:51.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2273838) - No such process 00:05:51.557 ERROR: process (pid: 2273838) is no longer running 00:05:51.557 15:48:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:51.557 15:48:30 -- common/autotest_common.sh@850 -- # return 1 00:05:51.557 15:48:30 -- common/autotest_common.sh@641 -- # es=1 00:05:51.557 15:48:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:51.557 15:48:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:51.557 15:48:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:51.557 15:48:30 -- event/cpu_locks.sh@54 -- # no_locks 00:05:51.557 15:48:30 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:51.557 15:48:30 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:51.557 15:48:30 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:51.557 00:05:51.557 real 0m4.345s 00:05:51.557 user 0m4.299s 00:05:51.557 sys 0m0.704s 00:05:51.557 15:48:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:51.557 15:48:30 -- common/autotest_common.sh@10 -- # set +x 00:05:51.557 ************************************ 00:05:51.557 END TEST default_locks 00:05:51.557 ************************************ 00:05:51.557 15:48:30 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:51.557 15:48:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.557 15:48:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.557 15:48:30 -- common/autotest_common.sh@10 -- # set +x 00:05:51.557 ************************************ 00:05:51.557 START TEST default_locks_via_rpc 00:05:51.557 ************************************ 00:05:51.557 15:48:31 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:05:51.557 15:48:31 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2274574 00:05:51.557 15:48:31 -- event/cpu_locks.sh@63 -- # waitforlisten 2274574 00:05:51.557 15:48:31 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.557 15:48:31 -- common/autotest_common.sh@817 -- # '[' -z 2274574 ']' 00:05:51.557 15:48:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.557 15:48:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:51.557 15:48:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.557 15:48:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:51.557 15:48:31 -- common/autotest_common.sh@10 -- # set +x 00:05:51.557 [2024-04-26 15:48:31.127686] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
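With default_locks done (the expected lock was present while the target ran, and the post-kill waitforlisten failed as intended), default_locks_via_rpc repeats the idea but toggles the locks on a running target instead of at launch time. The entries that follow condense to roughly this sequence, again using the suite's helpers rather than standalone commands:

    rpc_cmd framework_disable_cpumask_locks   # ask the running target to release its CPU core lock files
    no_locks                                  # the collected lock-file list must now be empty
    rpc_cmd framework_enable_cpumask_locks    # take the locks again
    locks_exist "$spdk_tgt_pid"               # lslocks | grep spdk_cpu_lock must succeed once more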
00:05:51.557 [2024-04-26 15:48:31.127768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2274574 ] 00:05:51.557 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.557 [2024-04-26 15:48:31.233464] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.815 [2024-04-26 15:48:31.458970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.749 15:48:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:52.749 15:48:32 -- common/autotest_common.sh@850 -- # return 0 00:05:52.749 15:48:32 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:52.749 15:48:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:52.749 15:48:32 -- common/autotest_common.sh@10 -- # set +x 00:05:52.749 15:48:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:52.749 15:48:32 -- event/cpu_locks.sh@67 -- # no_locks 00:05:52.749 15:48:32 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:52.749 15:48:32 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:52.749 15:48:32 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:52.749 15:48:32 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:52.749 15:48:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:52.749 15:48:32 -- common/autotest_common.sh@10 -- # set +x 00:05:52.749 15:48:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:52.749 15:48:32 -- event/cpu_locks.sh@71 -- # locks_exist 2274574 00:05:52.749 15:48:32 -- event/cpu_locks.sh@22 -- # lslocks -p 2274574 00:05:52.749 15:48:32 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.314 15:48:32 -- event/cpu_locks.sh@73 -- # killprocess 2274574 00:05:53.314 15:48:32 -- common/autotest_common.sh@936 -- # '[' -z 2274574 ']' 00:05:53.314 15:48:32 -- common/autotest_common.sh@940 -- # kill -0 2274574 00:05:53.314 15:48:32 -- common/autotest_common.sh@941 -- # uname 00:05:53.314 15:48:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:53.314 15:48:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2274574 00:05:53.314 15:48:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:53.314 15:48:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:53.314 15:48:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2274574' 00:05:53.314 killing process with pid 2274574 00:05:53.314 15:48:32 -- common/autotest_common.sh@955 -- # kill 2274574 00:05:53.314 15:48:32 -- common/autotest_common.sh@960 -- # wait 2274574 00:05:55.845 00:05:55.845 real 0m4.151s 00:05:55.845 user 0m4.103s 00:05:55.845 sys 0m0.645s 00:05:55.845 15:48:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:55.845 15:48:35 -- common/autotest_common.sh@10 -- # set +x 00:05:55.845 ************************************ 00:05:55.845 END TEST default_locks_via_rpc 00:05:55.845 ************************************ 00:05:55.845 15:48:35 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:55.845 15:48:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.845 15:48:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.845 15:48:35 -- common/autotest_common.sh@10 -- # set +x 00:05:55.845 ************************************ 00:05:55.845 START TEST non_locking_app_on_locked_coremask 
00:05:55.845 ************************************ 00:05:55.845 15:48:35 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:05:55.845 15:48:35 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.845 15:48:35 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2275307 00:05:55.845 15:48:35 -- event/cpu_locks.sh@81 -- # waitforlisten 2275307 /var/tmp/spdk.sock 00:05:55.845 15:48:35 -- common/autotest_common.sh@817 -- # '[' -z 2275307 ']' 00:05:55.845 15:48:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.845 15:48:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:55.845 15:48:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.845 15:48:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:55.845 15:48:35 -- common/autotest_common.sh@10 -- # set +x 00:05:55.845 [2024-04-26 15:48:35.396720] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:55.845 [2024-04-26 15:48:35.396803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2275307 ] 00:05:55.845 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.845 [2024-04-26 15:48:35.500892] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.104 [2024-04-26 15:48:35.720090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.084 15:48:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:57.085 15:48:36 -- common/autotest_common.sh@850 -- # return 0 00:05:57.085 15:48:36 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2275534 00:05:57.085 15:48:36 -- event/cpu_locks.sh@85 -- # waitforlisten 2275534 /var/tmp/spdk2.sock 00:05:57.085 15:48:36 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:57.085 15:48:36 -- common/autotest_common.sh@817 -- # '[' -z 2275534 ']' 00:05:57.085 15:48:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.085 15:48:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:57.085 15:48:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.085 15:48:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:57.085 15:48:36 -- common/autotest_common.sh@10 -- # set +x 00:05:57.085 [2024-04-26 15:48:36.702270] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:05:57.085 [2024-04-26 15:48:36.702387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2275534 ] 00:05:57.399 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.399 [2024-04-26 15:48:36.846590] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
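non_locking_app_on_locked_coremask runs two targets on the same core mask: the first spdk_tgt -m 0x1 takes the core lock, and the second is started with --disable-cpumask-locks on the second RPC socket, which is what the "CPU core locks deactivated." notice above refers to. Condensed, with $SPDK again standing for the spdk checkout path and backgrounding added here only to indicate that both run at once:

    $SPDK/build/bin/spdk_tgt -m 0x1 &                                                 # holds the core lock
    $SPDK/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # same mask, no lock taken

Because the second instance never tries to claim the lock, both can come up on the same core, and the entries that follow only verify (locks_exist 2275307) that the first instance still holds its lock before both are killed.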
00:05:57.399 [2024-04-26 15:48:36.846636] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.656 [2024-04-26 15:48:37.292846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.556 15:48:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:59.556 15:48:39 -- common/autotest_common.sh@850 -- # return 0 00:05:59.556 15:48:39 -- event/cpu_locks.sh@87 -- # locks_exist 2275307 00:05:59.556 15:48:39 -- event/cpu_locks.sh@22 -- # lslocks -p 2275307 00:05:59.556 15:48:39 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.122 lslocks: write error 00:06:00.122 15:48:39 -- event/cpu_locks.sh@89 -- # killprocess 2275307 00:06:00.122 15:48:39 -- common/autotest_common.sh@936 -- # '[' -z 2275307 ']' 00:06:00.122 15:48:39 -- common/autotest_common.sh@940 -- # kill -0 2275307 00:06:00.122 15:48:39 -- common/autotest_common.sh@941 -- # uname 00:06:00.122 15:48:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:00.122 15:48:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2275307 00:06:00.122 15:48:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:00.122 15:48:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:00.122 15:48:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2275307' 00:06:00.122 killing process with pid 2275307 00:06:00.122 15:48:39 -- common/autotest_common.sh@955 -- # kill 2275307 00:06:00.122 15:48:39 -- common/autotest_common.sh@960 -- # wait 2275307 00:06:05.386 15:48:44 -- event/cpu_locks.sh@90 -- # killprocess 2275534 00:06:05.386 15:48:44 -- common/autotest_common.sh@936 -- # '[' -z 2275534 ']' 00:06:05.386 15:48:44 -- common/autotest_common.sh@940 -- # kill -0 2275534 00:06:05.386 15:48:44 -- common/autotest_common.sh@941 -- # uname 00:06:05.386 15:48:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:05.386 15:48:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2275534 00:06:05.386 15:48:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:05.386 15:48:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:05.386 15:48:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2275534' 00:06:05.386 killing process with pid 2275534 00:06:05.386 15:48:44 -- common/autotest_common.sh@955 -- # kill 2275534 00:06:05.386 15:48:44 -- common/autotest_common.sh@960 -- # wait 2275534 00:06:07.290 00:06:07.290 real 0m11.431s 00:06:07.290 user 0m11.654s 00:06:07.290 sys 0m1.102s 00:06:07.290 15:48:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:07.290 15:48:46 -- common/autotest_common.sh@10 -- # set +x 00:06:07.290 ************************************ 00:06:07.290 END TEST non_locking_app_on_locked_coremask 00:06:07.290 ************************************ 00:06:07.290 15:48:46 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:07.290 15:48:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.290 15:48:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.290 15:48:46 -- common/autotest_common.sh@10 -- # set +x 00:06:07.290 ************************************ 00:06:07.290 START TEST locking_app_on_unlocked_coremask 00:06:07.290 ************************************ 00:06:07.290 15:48:46 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:06:07.290 15:48:46 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2277413 00:06:07.290 15:48:46 -- 
event/cpu_locks.sh@99 -- # waitforlisten 2277413 /var/tmp/spdk.sock 00:06:07.290 15:48:46 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:07.290 15:48:46 -- common/autotest_common.sh@817 -- # '[' -z 2277413 ']' 00:06:07.290 15:48:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.290 15:48:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:07.291 15:48:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.291 15:48:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:07.291 15:48:46 -- common/autotest_common.sh@10 -- # set +x 00:06:07.550 [2024-04-26 15:48:47.024571] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:07.550 [2024-04-26 15:48:47.024660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2277413 ] 00:06:07.550 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.550 [2024-04-26 15:48:47.128049] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:07.550 [2024-04-26 15:48:47.128132] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.808 [2024-04-26 15:48:47.339802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.744 15:48:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:08.744 15:48:48 -- common/autotest_common.sh@850 -- # return 0 00:06:08.744 15:48:48 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2277646 00:06:08.744 15:48:48 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:08.744 15:48:48 -- event/cpu_locks.sh@103 -- # waitforlisten 2277646 /var/tmp/spdk2.sock 00:06:08.744 15:48:48 -- common/autotest_common.sh@817 -- # '[' -z 2277646 ']' 00:06:08.744 15:48:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.744 15:48:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:08.744 15:48:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.744 15:48:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:08.744 15:48:48 -- common/autotest_common.sh@10 -- # set +x 00:06:08.744 [2024-04-26 15:48:48.341088] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:08.744 [2024-04-26 15:48:48.341179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2277646 ] 00:06:08.744 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.002 [2024-04-26 15:48:48.484250] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.260 [2024-04-26 15:48:48.909960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.158 15:48:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:11.158 15:48:50 -- common/autotest_common.sh@850 -- # return 0 00:06:11.158 15:48:50 -- event/cpu_locks.sh@105 -- # locks_exist 2277646 00:06:11.158 15:48:50 -- event/cpu_locks.sh@22 -- # lslocks -p 2277646 00:06:11.158 15:48:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.091 lslocks: write error 00:06:12.091 15:48:51 -- event/cpu_locks.sh@107 -- # killprocess 2277413 00:06:12.091 15:48:51 -- common/autotest_common.sh@936 -- # '[' -z 2277413 ']' 00:06:12.091 15:48:51 -- common/autotest_common.sh@940 -- # kill -0 2277413 00:06:12.091 15:48:51 -- common/autotest_common.sh@941 -- # uname 00:06:12.091 15:48:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:12.091 15:48:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2277413 00:06:12.091 15:48:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:12.091 15:48:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:12.091 15:48:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2277413' 00:06:12.091 killing process with pid 2277413 00:06:12.091 15:48:51 -- common/autotest_common.sh@955 -- # kill 2277413 00:06:12.091 15:48:51 -- common/autotest_common.sh@960 -- # wait 2277413 00:06:17.354 15:48:56 -- event/cpu_locks.sh@108 -- # killprocess 2277646 00:06:17.354 15:48:56 -- common/autotest_common.sh@936 -- # '[' -z 2277646 ']' 00:06:17.354 15:48:56 -- common/autotest_common.sh@940 -- # kill -0 2277646 00:06:17.354 15:48:56 -- common/autotest_common.sh@941 -- # uname 00:06:17.354 15:48:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:17.354 15:48:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2277646 00:06:17.354 15:48:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:17.354 15:48:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:17.354 15:48:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2277646' 00:06:17.354 killing process with pid 2277646 00:06:17.354 15:48:56 -- common/autotest_common.sh@955 -- # kill 2277646 00:06:17.354 15:48:56 -- common/autotest_common.sh@960 -- # wait 2277646 00:06:19.253 00:06:19.253 real 0m11.920s 00:06:19.253 user 0m12.106s 00:06:19.253 sys 0m1.324s 00:06:19.253 15:48:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:19.253 15:48:58 -- common/autotest_common.sh@10 -- # set +x 00:06:19.253 ************************************ 00:06:19.253 END TEST locking_app_on_unlocked_coremask 00:06:19.253 ************************************ 00:06:19.253 15:48:58 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:19.253 15:48:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:19.253 15:48:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.253 15:48:58 -- common/autotest_common.sh@10 -- # set +x 00:06:19.512 
************************************ 00:06:19.512 START TEST locking_app_on_locked_coremask 00:06:19.512 ************************************ 00:06:19.512 15:48:59 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:06:19.512 15:48:59 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2279485 00:06:19.512 15:48:59 -- event/cpu_locks.sh@116 -- # waitforlisten 2279485 /var/tmp/spdk.sock 00:06:19.512 15:48:59 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.512 15:48:59 -- common/autotest_common.sh@817 -- # '[' -z 2279485 ']' 00:06:19.512 15:48:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.512 15:48:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:19.512 15:48:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.512 15:48:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:19.512 15:48:59 -- common/autotest_common.sh@10 -- # set +x 00:06:19.512 [2024-04-26 15:48:59.107138] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:19.512 [2024-04-26 15:48:59.107245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2279485 ] 00:06:19.512 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.770 [2024-04-26 15:48:59.212703] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.770 [2024-04-26 15:48:59.423718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.703 15:49:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:20.703 15:49:00 -- common/autotest_common.sh@850 -- # return 0 00:06:20.703 15:49:00 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2279695 00:06:20.703 15:49:00 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2279695 /var/tmp/spdk2.sock 00:06:20.703 15:49:00 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:20.703 15:49:00 -- common/autotest_common.sh@638 -- # local es=0 00:06:20.703 15:49:00 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2279695 /var/tmp/spdk2.sock 00:06:20.703 15:49:00 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:20.703 15:49:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:20.703 15:49:00 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:20.703 15:49:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:20.703 15:49:00 -- common/autotest_common.sh@641 -- # waitforlisten 2279695 /var/tmp/spdk2.sock 00:06:20.703 15:49:00 -- common/autotest_common.sh@817 -- # '[' -z 2279695 ']' 00:06:20.703 15:49:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.703 15:49:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:20.703 15:49:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:20.703 15:49:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:20.703 15:49:00 -- common/autotest_common.sh@10 -- # set +x 00:06:20.961 [2024-04-26 15:49:00.432517] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:20.961 [2024-04-26 15:49:00.432607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2279695 ] 00:06:20.961 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.961 [2024-04-26 15:49:00.575279] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2279485 has claimed it. 00:06:20.961 [2024-04-26 15:49:00.575331] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:21.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2279695) - No such process 00:06:21.526 ERROR: process (pid: 2279695) is no longer running 00:06:21.526 15:49:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:21.526 15:49:01 -- common/autotest_common.sh@850 -- # return 1 00:06:21.526 15:49:01 -- common/autotest_common.sh@641 -- # es=1 00:06:21.526 15:49:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:21.526 15:49:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:21.526 15:49:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:21.526 15:49:01 -- event/cpu_locks.sh@122 -- # locks_exist 2279485 00:06:21.526 15:49:01 -- event/cpu_locks.sh@22 -- # lslocks -p 2279485 00:06:21.526 15:49:01 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.784 lslocks: write error 00:06:21.784 15:49:01 -- event/cpu_locks.sh@124 -- # killprocess 2279485 00:06:21.784 15:49:01 -- common/autotest_common.sh@936 -- # '[' -z 2279485 ']' 00:06:21.784 15:49:01 -- common/autotest_common.sh@940 -- # kill -0 2279485 00:06:21.784 15:49:01 -- common/autotest_common.sh@941 -- # uname 00:06:21.784 15:49:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:21.784 15:49:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2279485 00:06:21.784 15:49:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:21.784 15:49:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:21.784 15:49:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2279485' 00:06:21.784 killing process with pid 2279485 00:06:21.784 15:49:01 -- common/autotest_common.sh@955 -- # kill 2279485 00:06:21.785 15:49:01 -- common/autotest_common.sh@960 -- # wait 2279485 00:06:24.316 00:06:24.316 real 0m4.604s 00:06:24.316 user 0m4.679s 00:06:24.316 sys 0m0.749s 00:06:24.316 15:49:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:24.316 15:49:03 -- common/autotest_common.sh@10 -- # set +x 00:06:24.316 ************************************ 00:06:24.316 END TEST locking_app_on_locked_coremask 00:06:24.316 ************************************ 00:06:24.316 15:49:03 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:24.316 15:49:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:24.316 15:49:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.316 15:49:03 -- common/autotest_common.sh@10 -- # set +x 00:06:24.316 ************************************ 00:06:24.316 START TEST locking_overlapped_coremask 00:06:24.316 
************************************ 00:06:24.316 15:49:03 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:06:24.316 15:49:03 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2280263 00:06:24.316 15:49:03 -- event/cpu_locks.sh@133 -- # waitforlisten 2280263 /var/tmp/spdk.sock 00:06:24.316 15:49:03 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:24.316 15:49:03 -- common/autotest_common.sh@817 -- # '[' -z 2280263 ']' 00:06:24.316 15:49:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.316 15:49:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:24.316 15:49:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.316 15:49:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:24.316 15:49:03 -- common/autotest_common.sh@10 -- # set +x 00:06:24.316 [2024-04-26 15:49:03.850787] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:24.317 [2024-04-26 15:49:03.850880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2280263 ] 00:06:24.317 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.317 [2024-04-26 15:49:03.955720] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.575 [2024-04-26 15:49:04.176575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.575 [2024-04-26 15:49:04.176660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.575 [2024-04-26 15:49:04.176669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.515 15:49:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:25.515 15:49:05 -- common/autotest_common.sh@850 -- # return 0 00:06:25.515 15:49:05 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2280497 00:06:25.515 15:49:05 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2280497 /var/tmp/spdk2.sock 00:06:25.515 15:49:05 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:25.515 15:49:05 -- common/autotest_common.sh@638 -- # local es=0 00:06:25.515 15:49:05 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2280497 /var/tmp/spdk2.sock 00:06:25.515 15:49:05 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:25.515 15:49:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:25.515 15:49:05 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:25.515 15:49:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:25.515 15:49:05 -- common/autotest_common.sh@641 -- # waitforlisten 2280497 /var/tmp/spdk2.sock 00:06:25.515 15:49:05 -- common/autotest_common.sh@817 -- # '[' -z 2280497 ']' 00:06:25.515 15:49:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.515 15:49:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:25.515 15:49:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:25.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.515 15:49:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:25.515 15:49:05 -- common/autotest_common.sh@10 -- # set +x 00:06:25.774 [2024-04-26 15:49:05.217698] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:25.774 [2024-04-26 15:49:05.217784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2280497 ] 00:06:25.774 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.774 [2024-04-26 15:49:05.359798] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2280263 has claimed it. 00:06:25.774 [2024-04-26 15:49:05.359856] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:26.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2280497) - No such process 00:06:26.342 ERROR: process (pid: 2280497) is no longer running 00:06:26.342 15:49:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:26.342 15:49:05 -- common/autotest_common.sh@850 -- # return 1 00:06:26.342 15:49:05 -- common/autotest_common.sh@641 -- # es=1 00:06:26.342 15:49:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:26.342 15:49:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:26.342 15:49:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:26.342 15:49:05 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:26.342 15:49:05 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:26.342 15:49:05 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:26.342 15:49:05 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:26.342 15:49:05 -- event/cpu_locks.sh@141 -- # killprocess 2280263 00:06:26.342 15:49:05 -- common/autotest_common.sh@936 -- # '[' -z 2280263 ']' 00:06:26.342 15:49:05 -- common/autotest_common.sh@940 -- # kill -0 2280263 00:06:26.342 15:49:05 -- common/autotest_common.sh@941 -- # uname 00:06:26.342 15:49:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:26.342 15:49:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2280263 00:06:26.342 15:49:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:26.342 15:49:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:26.342 15:49:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2280263' 00:06:26.342 killing process with pid 2280263 00:06:26.342 15:49:05 -- common/autotest_common.sh@955 -- # kill 2280263 00:06:26.342 15:49:05 -- common/autotest_common.sh@960 -- # wait 2280263 00:06:28.881 00:06:28.881 real 0m4.545s 00:06:28.881 user 0m12.007s 00:06:28.881 sys 0m0.587s 00:06:28.881 15:49:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:28.881 15:49:08 -- common/autotest_common.sh@10 -- # set +x 00:06:28.881 ************************************ 00:06:28.881 END TEST locking_overlapped_coremask 00:06:28.881 ************************************ 00:06:28.881 15:49:08 -- event/cpu_locks.sh@172 -- # 
run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:28.881 15:49:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.881 15:49:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.881 15:49:08 -- common/autotest_common.sh@10 -- # set +x 00:06:28.881 ************************************ 00:06:28.881 START TEST locking_overlapped_coremask_via_rpc 00:06:28.881 ************************************ 00:06:28.881 15:49:08 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:06:28.881 15:49:08 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2281061 00:06:28.881 15:49:08 -- event/cpu_locks.sh@149 -- # waitforlisten 2281061 /var/tmp/spdk.sock 00:06:28.881 15:49:08 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:28.881 15:49:08 -- common/autotest_common.sh@817 -- # '[' -z 2281061 ']' 00:06:28.881 15:49:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.881 15:49:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:28.881 15:49:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.881 15:49:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:28.881 15:49:08 -- common/autotest_common.sh@10 -- # set +x 00:06:28.881 [2024-04-26 15:49:08.533488] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:28.881 [2024-04-26 15:49:08.533575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2281061 ] 00:06:29.140 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.140 [2024-04-26 15:49:08.638554] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:29.140 [2024-04-26 15:49:08.638599] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.400 [2024-04-26 15:49:08.857423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.400 [2024-04-26 15:49:08.857492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.400 [2024-04-26 15:49:08.857497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.337 15:49:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:30.338 15:49:09 -- common/autotest_common.sh@850 -- # return 0 00:06:30.338 15:49:09 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2281288 00:06:30.338 15:49:09 -- event/cpu_locks.sh@153 -- # waitforlisten 2281288 /var/tmp/spdk2.sock 00:06:30.338 15:49:09 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:30.338 15:49:09 -- common/autotest_common.sh@817 -- # '[' -z 2281288 ']' 00:06:30.338 15:49:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.338 15:49:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:30.338 15:49:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:30.338 15:49:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:30.338 15:49:09 -- common/autotest_common.sh@10 -- # set +x 00:06:30.338 [2024-04-26 15:49:09.904193] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:30.338 [2024-04-26 15:49:09.904279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2281288 ] 00:06:30.338 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.597 [2024-04-26 15:49:10.053679] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:30.597 [2024-04-26 15:49:10.053731] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.856 [2024-04-26 15:49:10.527537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.856 [2024-04-26 15:49:10.531120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.856 [2024-04-26 15:49:10.531144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:32.755 15:49:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:32.755 15:49:12 -- common/autotest_common.sh@850 -- # return 0 00:06:32.755 15:49:12 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:32.755 15:49:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:32.755 15:49:12 -- common/autotest_common.sh@10 -- # set +x 00:06:32.755 15:49:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:32.755 15:49:12 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.755 15:49:12 -- common/autotest_common.sh@638 -- # local es=0 00:06:32.755 15:49:12 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.755 15:49:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:06:32.755 15:49:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:32.755 15:49:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:06:32.755 15:49:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:32.755 15:49:12 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.755 15:49:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:32.755 15:49:12 -- common/autotest_common.sh@10 -- # set +x 00:06:32.755 [2024-04-26 15:49:12.432194] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2281061 has claimed it. 
00:06:33.013 request: 00:06:33.013 { 00:06:33.013 "method": "framework_enable_cpumask_locks", 00:06:33.013 "req_id": 1 00:06:33.013 } 00:06:33.013 Got JSON-RPC error response 00:06:33.013 response: 00:06:33.013 { 00:06:33.013 "code": -32603, 00:06:33.013 "message": "Failed to claim CPU core: 2" 00:06:33.013 } 00:06:33.013 15:49:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:06:33.013 15:49:12 -- common/autotest_common.sh@641 -- # es=1 00:06:33.013 15:49:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:33.013 15:49:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:33.013 15:49:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:33.013 15:49:12 -- event/cpu_locks.sh@158 -- # waitforlisten 2281061 /var/tmp/spdk.sock 00:06:33.013 15:49:12 -- common/autotest_common.sh@817 -- # '[' -z 2281061 ']' 00:06:33.013 15:49:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.013 15:49:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:33.013 15:49:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.013 15:49:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:33.013 15:49:12 -- common/autotest_common.sh@10 -- # set +x 00:06:33.013 15:49:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:33.013 15:49:12 -- common/autotest_common.sh@850 -- # return 0 00:06:33.013 15:49:12 -- event/cpu_locks.sh@159 -- # waitforlisten 2281288 /var/tmp/spdk2.sock 00:06:33.013 15:49:12 -- common/autotest_common.sh@817 -- # '[' -z 2281288 ']' 00:06:33.013 15:49:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.013 15:49:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:33.013 15:49:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
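The -32603 "Failed to claim CPU core: 2" response above is the expected outcome of this negative test: the first spdk_tgt (pid 2281061, mask 0x7) has already enabled cpumask locks, and the second target's mask 0x1c overlaps it on core 2, so its framework_enable_cpumask_locks call is rejected. The earlier locks_exist checks in this log verify a claim the same way, by grepping lslocks -p <pid> output for spdk_cpu_lock entries under /var/tmp. As a rough sketch only (not part of the test scripts, file name below is made up), the same advisory-lock behaviour can be reproduced with flock(1) on a scratch file:

    lock=/tmp/demo_cpu_lock_002            # stand-in for /var/tmp/spdk_cpu_lock_002
    exec 9>"$lock"                         # open the lock file on fd 9
    if flock -n 9; then                    # non-blocking exclusive lock, like a core claim
        echo "lock claimed: $lock"
    else
        echo "already claimed - analogous to the JSON-RPC error above"
    fi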
00:06:33.013 15:49:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:33.013 15:49:12 -- common/autotest_common.sh@10 -- # set +x 00:06:33.272 15:49:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:33.272 15:49:12 -- common/autotest_common.sh@850 -- # return 0 00:06:33.272 15:49:12 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:33.272 15:49:12 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:33.272 15:49:12 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:33.272 15:49:12 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:33.272 00:06:33.272 real 0m4.375s 00:06:33.272 user 0m1.034s 00:06:33.272 sys 0m0.190s 00:06:33.272 15:49:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:33.272 15:49:12 -- common/autotest_common.sh@10 -- # set +x 00:06:33.272 ************************************ 00:06:33.272 END TEST locking_overlapped_coremask_via_rpc 00:06:33.272 ************************************ 00:06:33.272 15:49:12 -- event/cpu_locks.sh@174 -- # cleanup 00:06:33.272 15:49:12 -- event/cpu_locks.sh@15 -- # [[ -z 2281061 ]] 00:06:33.272 15:49:12 -- event/cpu_locks.sh@15 -- # killprocess 2281061 00:06:33.272 15:49:12 -- common/autotest_common.sh@936 -- # '[' -z 2281061 ']' 00:06:33.272 15:49:12 -- common/autotest_common.sh@940 -- # kill -0 2281061 00:06:33.272 15:49:12 -- common/autotest_common.sh@941 -- # uname 00:06:33.272 15:49:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:33.272 15:49:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2281061 00:06:33.272 15:49:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:33.272 15:49:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:33.272 15:49:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2281061' 00:06:33.272 killing process with pid 2281061 00:06:33.272 15:49:12 -- common/autotest_common.sh@955 -- # kill 2281061 00:06:33.272 15:49:12 -- common/autotest_common.sh@960 -- # wait 2281061 00:06:35.885 15:49:15 -- event/cpu_locks.sh@16 -- # [[ -z 2281288 ]] 00:06:35.885 15:49:15 -- event/cpu_locks.sh@16 -- # killprocess 2281288 00:06:35.885 15:49:15 -- common/autotest_common.sh@936 -- # '[' -z 2281288 ']' 00:06:35.885 15:49:15 -- common/autotest_common.sh@940 -- # kill -0 2281288 00:06:35.885 15:49:15 -- common/autotest_common.sh@941 -- # uname 00:06:35.885 15:49:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:35.885 15:49:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2281288 00:06:35.885 15:49:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:35.885 15:49:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:35.885 15:49:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2281288' 00:06:35.885 killing process with pid 2281288 00:06:35.885 15:49:15 -- common/autotest_common.sh@955 -- # kill 2281288 00:06:35.885 15:49:15 -- common/autotest_common.sh@960 -- # wait 2281288 00:06:38.421 15:49:17 -- event/cpu_locks.sh@18 -- # rm -f 00:06:38.421 15:49:17 -- event/cpu_locks.sh@1 -- # cleanup 00:06:38.421 15:49:17 -- event/cpu_locks.sh@15 -- # [[ -z 2281061 ]] 00:06:38.421 15:49:17 -- event/cpu_locks.sh@15 -- # killprocess 2281061 
00:06:38.421 15:49:17 -- common/autotest_common.sh@936 -- # '[' -z 2281061 ']' 00:06:38.421 15:49:17 -- common/autotest_common.sh@940 -- # kill -0 2281061 00:06:38.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2281061) - No such process 00:06:38.421 15:49:17 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2281061 is not found' 00:06:38.421 Process with pid 2281061 is not found 00:06:38.421 15:49:17 -- event/cpu_locks.sh@16 -- # [[ -z 2281288 ]] 00:06:38.421 15:49:17 -- event/cpu_locks.sh@16 -- # killprocess 2281288 00:06:38.421 15:49:17 -- common/autotest_common.sh@936 -- # '[' -z 2281288 ']' 00:06:38.421 15:49:17 -- common/autotest_common.sh@940 -- # kill -0 2281288 00:06:38.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2281288) - No such process 00:06:38.421 15:49:17 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2281288 is not found' 00:06:38.421 Process with pid 2281288 is not found 00:06:38.421 15:49:17 -- event/cpu_locks.sh@18 -- # rm -f 00:06:38.421 00:06:38.421 real 0m51.598s 00:06:38.421 user 1m25.679s 00:06:38.421 sys 0m6.735s 00:06:38.421 15:49:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:38.421 15:49:17 -- common/autotest_common.sh@10 -- # set +x 00:06:38.421 ************************************ 00:06:38.421 END TEST cpu_locks 00:06:38.421 ************************************ 00:06:38.421 00:06:38.421 real 1m22.035s 00:06:38.421 user 2m19.972s 00:06:38.421 sys 0m10.607s 00:06:38.421 15:49:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:38.421 15:49:17 -- common/autotest_common.sh@10 -- # set +x 00:06:38.421 ************************************ 00:06:38.421 END TEST event 00:06:38.421 ************************************ 00:06:38.421 15:49:17 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:38.421 15:49:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:38.421 15:49:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.421 15:49:17 -- common/autotest_common.sh@10 -- # set +x 00:06:38.681 ************************************ 00:06:38.681 START TEST thread 00:06:38.681 ************************************ 00:06:38.681 15:49:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:38.681 * Looking for test storage... 00:06:38.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:38.681 15:49:18 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:38.681 15:49:18 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:38.681 15:49:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.681 15:49:18 -- common/autotest_common.sh@10 -- # set +x 00:06:38.681 ************************************ 00:06:38.681 START TEST thread_poller_perf 00:06:38.681 ************************************ 00:06:38.681 15:49:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:38.681 [2024-04-26 15:49:18.343254] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:38.681 [2024-04-26 15:49:18.343341] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2282897 ] 00:06:38.940 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.940 [2024-04-26 15:49:18.443656] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.198 [2024-04-26 15:49:18.658710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.198 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:40.574 ====================================== 00:06:40.574 busy:2306113384 (cyc) 00:06:40.574 total_run_count: 400000 00:06:40.574 tsc_hz: 2300000000 (cyc) 00:06:40.574 ====================================== 00:06:40.574 poller_cost: 5765 (cyc), 2506 (nsec) 00:06:40.574 00:06:40.574 real 0m1.750s 00:06:40.574 user 0m1.609s 00:06:40.574 sys 0m0.134s 00:06:40.574 15:49:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:40.574 15:49:20 -- common/autotest_common.sh@10 -- # set +x 00:06:40.574 ************************************ 00:06:40.574 END TEST thread_poller_perf 00:06:40.574 ************************************ 00:06:40.574 15:49:20 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:40.574 15:49:20 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:40.574 15:49:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.574 15:49:20 -- common/autotest_common.sh@10 -- # set +x 00:06:40.574 ************************************ 00:06:40.574 START TEST thread_poller_perf 00:06:40.574 ************************************ 00:06:40.574 15:49:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:40.833 [2024-04-26 15:49:20.258039] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:40.833 [2024-04-26 15:49:20.258143] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2283215 ] 00:06:40.833 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.833 [2024-04-26 15:49:20.362848] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.094 [2024-04-26 15:49:20.585573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.094 Running 1000 pollers for 1 seconds with 0 microseconds period. 
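The poller_cost figure in the summary above is consistent with the busy cycle count divided by total_run_count, converted to nanoseconds with the reported tsc_hz. A small sketch re-deriving the first run's numbers (all values copied from the output above, nothing else assumed):

    busy=2306113384; runs=400000; tsc_hz=2300000000
    cyc=$((busy / runs))                                                          # 5765 cycles per poll
    nsec=$(awk -v c="$cyc" -v hz="$tsc_hz" 'BEGIN{printf "%d", c * 1e9 / hz}')    # ~2506 nsec at 2.3 GHz
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The 0-microsecond-period run whose results follow below works out the same way: 2302748960 / 5163000 is about 446 cycles, roughly 193 nsec per poll.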
00:06:42.470 ====================================== 00:06:42.470 busy:2302748960 (cyc) 00:06:42.470 total_run_count: 5163000 00:06:42.470 tsc_hz: 2300000000 (cyc) 00:06:42.470 ====================================== 00:06:42.470 poller_cost: 446 (cyc), 193 (nsec) 00:06:42.470 00:06:42.470 real 0m1.771s 00:06:42.470 user 0m1.628s 00:06:42.470 sys 0m0.136s 00:06:42.470 15:49:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:42.470 15:49:21 -- common/autotest_common.sh@10 -- # set +x 00:06:42.470 ************************************ 00:06:42.470 END TEST thread_poller_perf 00:06:42.470 ************************************ 00:06:42.471 15:49:22 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:42.471 00:06:42.471 real 0m3.909s 00:06:42.471 user 0m3.378s 00:06:42.471 sys 0m0.505s 00:06:42.471 15:49:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:42.471 15:49:22 -- common/autotest_common.sh@10 -- # set +x 00:06:42.471 ************************************ 00:06:42.471 END TEST thread 00:06:42.471 ************************************ 00:06:42.471 15:49:22 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:42.471 15:49:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:42.471 15:49:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.471 15:49:22 -- common/autotest_common.sh@10 -- # set +x 00:06:42.729 ************************************ 00:06:42.729 START TEST accel 00:06:42.729 ************************************ 00:06:42.729 15:49:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:42.729 * Looking for test storage... 00:06:42.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:42.730 15:49:22 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:42.730 15:49:22 -- accel/accel.sh@82 -- # get_expected_opcs 00:06:42.730 15:49:22 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:42.730 15:49:22 -- accel/accel.sh@62 -- # spdk_tgt_pid=2283563 00:06:42.730 15:49:22 -- accel/accel.sh@63 -- # waitforlisten 2283563 00:06:42.730 15:49:22 -- common/autotest_common.sh@817 -- # '[' -z 2283563 ']' 00:06:42.730 15:49:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.730 15:49:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:42.730 15:49:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.730 15:49:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:42.730 15:49:22 -- common/autotest_common.sh@10 -- # set +x 00:06:42.730 15:49:22 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:42.730 15:49:22 -- accel/accel.sh@61 -- # build_accel_config 00:06:42.730 15:49:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.730 15:49:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.730 15:49:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.730 15:49:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.730 15:49:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.730 15:49:22 -- accel/accel.sh@40 -- # local IFS=, 00:06:42.730 15:49:22 -- accel/accel.sh@41 -- # jq -r . 
00:06:42.730 [2024-04-26 15:49:22.359758] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:42.730 [2024-04-26 15:49:22.359850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2283563 ] 00:06:42.988 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.988 [2024-04-26 15:49:22.463286] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.248 [2024-04-26 15:49:22.683502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.186 15:49:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:44.186 15:49:23 -- common/autotest_common.sh@850 -- # return 0 00:06:44.186 15:49:23 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:44.186 15:49:23 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:44.186 15:49:23 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:44.186 15:49:23 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:44.186 15:49:23 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:44.186 15:49:23 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:44.186 15:49:23 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:44.186 15:49:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:44.186 15:49:23 -- common/autotest_common.sh@10 -- # set +x 00:06:44.186 15:49:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:44.186 15:49:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # IFS== 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # read -r opc module 00:06:44.186 15:49:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.186 15:49:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # IFS== 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # read -r opc module 00:06:44.186 15:49:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.186 15:49:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # IFS== 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # read -r opc module 00:06:44.186 15:49:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.186 15:49:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # IFS== 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # read -r opc module 00:06:44.186 15:49:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.186 15:49:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # IFS== 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # read -r opc module 00:06:44.186 15:49:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.186 15:49:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # IFS== 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # read -r opc module 00:06:44.186 15:49:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.186 15:49:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # IFS== 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # read -r opc module 00:06:44.186 15:49:23 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:06:44.186 15:49:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # IFS== 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # read -r opc module 00:06:44.186 15:49:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.186 15:49:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # IFS== 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # read -r opc module 00:06:44.186 15:49:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.186 15:49:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # IFS== 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # read -r opc module 00:06:44.186 15:49:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.186 15:49:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # IFS== 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # read -r opc module 00:06:44.186 15:49:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.186 15:49:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # IFS== 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # read -r opc module 00:06:44.186 15:49:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.186 15:49:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # IFS== 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # read -r opc module 00:06:44.186 15:49:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.186 15:49:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # IFS== 00:06:44.186 15:49:23 -- accel/accel.sh@72 -- # read -r opc module 00:06:44.186 15:49:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:44.186 15:49:23 -- accel/accel.sh@75 -- # killprocess 2283563 00:06:44.186 15:49:23 -- common/autotest_common.sh@936 -- # '[' -z 2283563 ']' 00:06:44.186 15:49:23 -- common/autotest_common.sh@940 -- # kill -0 2283563 00:06:44.186 15:49:23 -- common/autotest_common.sh@941 -- # uname 00:06:44.186 15:49:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:44.186 15:49:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2283563 00:06:44.186 15:49:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:44.186 15:49:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:44.186 15:49:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2283563' 00:06:44.186 killing process with pid 2283563 00:06:44.186 15:49:23 -- common/autotest_common.sh@955 -- # kill 2283563 00:06:44.186 15:49:23 -- common/autotest_common.sh@960 -- # wait 2283563 00:06:46.720 15:49:25 -- accel/accel.sh@76 -- # trap - ERR 00:06:46.720 15:49:25 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:46.720 15:49:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:46.720 15:49:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.720 15:49:25 -- common/autotest_common.sh@10 -- # set +x 00:06:46.720 15:49:26 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:06:46.720 15:49:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:46.720 15:49:26 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:46.720 15:49:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.720 15:49:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.720 15:49:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.720 15:49:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.720 15:49:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.720 15:49:26 -- accel/accel.sh@40 -- # local IFS=, 00:06:46.720 15:49:26 -- accel/accel.sh@41 -- # jq -r . 00:06:46.720 15:49:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:46.720 15:49:26 -- common/autotest_common.sh@10 -- # set +x 00:06:46.720 15:49:26 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:46.720 15:49:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:46.720 15:49:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.720 15:49:26 -- common/autotest_common.sh@10 -- # set +x 00:06:46.720 ************************************ 00:06:46.720 START TEST accel_missing_filename 00:06:46.720 ************************************ 00:06:46.720 15:49:26 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:06:46.720 15:49:26 -- common/autotest_common.sh@638 -- # local es=0 00:06:46.720 15:49:26 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:46.720 15:49:26 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:46.720 15:49:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:46.720 15:49:26 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:46.720 15:49:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:46.720 15:49:26 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:06:46.720 15:49:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:46.720 15:49:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.720 15:49:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.720 15:49:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.720 15:49:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.720 15:49:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.720 15:49:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.720 15:49:26 -- accel/accel.sh@40 -- # local IFS=, 00:06:46.720 15:49:26 -- accel/accel.sh@41 -- # jq -r . 00:06:46.979 [2024-04-26 15:49:26.434862] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:46.979 [2024-04-26 15:49:26.434947] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2284297 ] 00:06:46.979 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.979 [2024-04-26 15:49:26.542195] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.239 [2024-04-26 15:49:26.764878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.498 [2024-04-26 15:49:26.997308] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.065 [2024-04-26 15:49:27.518414] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:48.324 A filename is required. 
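The "A filename is required." failure above is exactly what accel_missing_filename is after: a compress workload needs an uncompressed input file via -l (see the accel_perf option list printed further below), and the NOT wrapper only passes when accel_perf exits non-zero, which the es= checks just below confirm. For contrast, the accel_compress_verify run that follows supplies the file; an invocation along those lines, using the paths from this job, would at least clear the missing-filename check (whether it runs to completion depends on the remaining flags, as the next test shows for -y):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib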
00:06:48.324 15:49:27 -- common/autotest_common.sh@641 -- # es=234 00:06:48.324 15:49:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:48.324 15:49:27 -- common/autotest_common.sh@650 -- # es=106 00:06:48.324 15:49:27 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:48.324 15:49:27 -- common/autotest_common.sh@658 -- # es=1 00:06:48.324 15:49:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:48.324 00:06:48.324 real 0m1.540s 00:06:48.324 user 0m1.380s 00:06:48.324 sys 0m0.194s 00:06:48.324 15:49:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:48.324 15:49:27 -- common/autotest_common.sh@10 -- # set +x 00:06:48.324 ************************************ 00:06:48.324 END TEST accel_missing_filename 00:06:48.324 ************************************ 00:06:48.324 15:49:27 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:48.324 15:49:27 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:48.324 15:49:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.324 15:49:27 -- common/autotest_common.sh@10 -- # set +x 00:06:48.584 ************************************ 00:06:48.584 START TEST accel_compress_verify 00:06:48.584 ************************************ 00:06:48.584 15:49:28 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:48.584 15:49:28 -- common/autotest_common.sh@638 -- # local es=0 00:06:48.584 15:49:28 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:48.584 15:49:28 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:48.584 15:49:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:48.584 15:49:28 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:48.584 15:49:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:48.584 15:49:28 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:48.584 15:49:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:48.584 15:49:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.584 15:49:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.584 15:49:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.584 15:49:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.584 15:49:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.584 15:49:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.584 15:49:28 -- accel/accel.sh@40 -- # local IFS=, 00:06:48.584 15:49:28 -- accel/accel.sh@41 -- # jq -r . 00:06:48.584 [2024-04-26 15:49:28.128713] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:48.584 [2024-04-26 15:49:28.128786] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2284738 ] 00:06:48.584 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.584 [2024-04-26 15:49:28.229676] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.843 [2024-04-26 15:49:28.452566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.103 [2024-04-26 15:49:28.691057] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:49.670 [2024-04-26 15:49:29.229027] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:50.238 00:06:50.238 Compression does not support the verify option, aborting. 00:06:50.238 15:49:29 -- common/autotest_common.sh@641 -- # es=161 00:06:50.238 15:49:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:50.238 15:49:29 -- common/autotest_common.sh@650 -- # es=33 00:06:50.238 15:49:29 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:50.238 15:49:29 -- common/autotest_common.sh@658 -- # es=1 00:06:50.238 15:49:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:50.238 00:06:50.238 real 0m1.558s 00:06:50.238 user 0m1.401s 00:06:50.238 sys 0m0.187s 00:06:50.238 15:49:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:50.238 15:49:29 -- common/autotest_common.sh@10 -- # set +x 00:06:50.238 ************************************ 00:06:50.238 END TEST accel_compress_verify 00:06:50.238 ************************************ 00:06:50.238 15:49:29 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:50.238 15:49:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:50.238 15:49:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.238 15:49:29 -- common/autotest_common.sh@10 -- # set +x 00:06:50.238 ************************************ 00:06:50.238 START TEST accel_wrong_workload 00:06:50.238 ************************************ 00:06:50.238 15:49:29 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:06:50.238 15:49:29 -- common/autotest_common.sh@638 -- # local es=0 00:06:50.238 15:49:29 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:50.238 15:49:29 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:50.238 15:49:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:50.238 15:49:29 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:50.238 15:49:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:50.238 15:49:29 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:06:50.238 15:49:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:50.238 15:49:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.238 15:49:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.238 15:49:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.238 15:49:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.238 15:49:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.238 15:49:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.238 15:49:29 -- accel/accel.sh@40 -- # local IFS=, 00:06:50.238 15:49:29 -- accel/accel.sh@41 -- # jq -r . 
00:06:50.238 Unsupported workload type: foobar 00:06:50.238 [2024-04-26 15:49:29.839339] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:50.238 accel_perf options: 00:06:50.238 [-h help message] 00:06:50.238 [-q queue depth per core] 00:06:50.238 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:50.238 [-T number of threads per core 00:06:50.238 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:50.238 [-t time in seconds] 00:06:50.238 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:50.238 [ dif_verify, , dif_generate, dif_generate_copy 00:06:50.238 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:50.238 [-l for compress/decompress workloads, name of uncompressed input file 00:06:50.238 [-S for crc32c workload, use this seed value (default 0) 00:06:50.238 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:50.238 [-f for fill workload, use this BYTE value (default 255) 00:06:50.238 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:50.238 [-y verify result if this switch is on] 00:06:50.238 [-a tasks to allocate per core (default: same value as -q)] 00:06:50.238 Can be used to spread operations across a wider range of memory. 00:06:50.238 15:49:29 -- common/autotest_common.sh@641 -- # es=1 00:06:50.238 15:49:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:50.238 15:49:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:50.238 15:49:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:50.238 00:06:50.238 real 0m0.067s 00:06:50.238 user 0m0.077s 00:06:50.238 sys 0m0.032s 00:06:50.238 15:49:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:50.238 15:49:29 -- common/autotest_common.sh@10 -- # set +x 00:06:50.238 ************************************ 00:06:50.238 END TEST accel_wrong_workload 00:06:50.238 ************************************ 00:06:50.238 15:49:29 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:50.238 15:49:29 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:50.238 15:49:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.238 15:49:29 -- common/autotest_common.sh@10 -- # set +x 00:06:50.498 ************************************ 00:06:50.498 START TEST accel_negative_buffers 00:06:50.498 ************************************ 00:06:50.498 15:49:30 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:50.498 15:49:30 -- common/autotest_common.sh@638 -- # local es=0 00:06:50.498 15:49:30 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:50.498 15:49:30 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:50.498 15:49:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:50.498 15:49:30 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:50.498 15:49:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:50.498 15:49:30 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:06:50.498 15:49:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:50.498 15:49:30 -- accel/accel.sh@12 
-- # build_accel_config 00:06:50.498 15:49:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.498 15:49:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.498 15:49:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.498 15:49:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.498 15:49:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.498 15:49:30 -- accel/accel.sh@40 -- # local IFS=, 00:06:50.498 15:49:30 -- accel/accel.sh@41 -- # jq -r . 00:06:50.498 -x option must be non-negative. 00:06:50.498 [2024-04-26 15:49:30.066368] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:50.498 accel_perf options: 00:06:50.498 [-h help message] 00:06:50.498 [-q queue depth per core] 00:06:50.498 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:50.498 [-T number of threads per core 00:06:50.498 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:50.498 [-t time in seconds] 00:06:50.498 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:50.498 [ dif_verify, , dif_generate, dif_generate_copy 00:06:50.498 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:50.498 [-l for compress/decompress workloads, name of uncompressed input file 00:06:50.498 [-S for crc32c workload, use this seed value (default 0) 00:06:50.498 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:50.498 [-f for fill workload, use this BYTE value (default 255) 00:06:50.498 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:50.498 [-y verify result if this switch is on] 00:06:50.498 [-a tasks to allocate per core (default: same value as -q)] 00:06:50.498 Can be used to spread operations across a wider range of memory. 
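The usage text above (printed once for the bogus "-w foobar" and again for "-x -1") doubles as a quick reference for accel_perf's command line. Going by those options and the invocations that appear later in this log, a standalone run of the same binary would look roughly like this (workspace path as shown in the log; adjust for your own build tree):

    # 1-second software crc32c pass with seed 32 and result verification,
    # mirroring the accel_crc32c test that follows in this log.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w crc32c -S 32 -y

    # The two failing cases above exit non-zero and print this same usage text:
    #   -w foobar        -> "Unsupported workload type: foobar"
    #   -w xor -y -x -1  -> "-x option must be non-negative."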
00:06:50.498 15:49:30 -- common/autotest_common.sh@641 -- # es=1 00:06:50.498 15:49:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:50.498 15:49:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:50.498 15:49:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:50.498 00:06:50.498 real 0m0.071s 00:06:50.498 user 0m0.070s 00:06:50.498 sys 0m0.042s 00:06:50.498 15:49:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:50.498 15:49:30 -- common/autotest_common.sh@10 -- # set +x 00:06:50.498 ************************************ 00:06:50.498 END TEST accel_negative_buffers 00:06:50.498 ************************************ 00:06:50.498 15:49:30 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:50.498 15:49:30 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:50.498 15:49:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.498 15:49:30 -- common/autotest_common.sh@10 -- # set +x 00:06:50.757 ************************************ 00:06:50.757 START TEST accel_crc32c 00:06:50.757 ************************************ 00:06:50.757 15:49:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:50.757 15:49:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.757 15:49:30 -- accel/accel.sh@17 -- # local accel_module 00:06:50.757 15:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:50.757 15:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:50.757 15:49:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:50.757 15:49:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:50.757 15:49:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.757 15:49:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.757 15:49:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.757 15:49:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.757 15:49:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.757 15:49:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.758 15:49:30 -- accel/accel.sh@40 -- # local IFS=, 00:06:50.758 15:49:30 -- accel/accel.sh@41 -- # jq -r . 00:06:50.758 [2024-04-26 15:49:30.311975] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:50.758 [2024-04-26 15:49:30.312092] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2285054 ] 00:06:50.758 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.758 [2024-04-26 15:49:30.419027] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.016 [2024-04-26 15:49:30.650743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.275 15:49:30 -- accel/accel.sh@20 -- # val= 00:06:51.275 15:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:51.275 15:49:30 -- accel/accel.sh@20 -- # val= 00:06:51.275 15:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:51.275 15:49:30 -- accel/accel.sh@20 -- # val=0x1 00:06:51.275 15:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:51.275 15:49:30 -- accel/accel.sh@20 -- # val= 00:06:51.275 15:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:51.275 15:49:30 -- accel/accel.sh@20 -- # val= 00:06:51.275 15:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:51.275 15:49:30 -- accel/accel.sh@20 -- # val=crc32c 00:06:51.275 15:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.275 15:49:30 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:51.275 15:49:30 -- accel/accel.sh@20 -- # val=32 00:06:51.275 15:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:51.275 15:49:30 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.275 15:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:51.275 15:49:30 -- accel/accel.sh@20 -- # val= 00:06:51.275 15:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:51.275 15:49:30 -- accel/accel.sh@20 -- # val=software 00:06:51.275 15:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.275 15:49:30 -- accel/accel.sh@22 -- # accel_module=software 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:51.275 15:49:30 -- accel/accel.sh@20 -- # val=32 00:06:51.275 15:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:51.275 15:49:30 -- accel/accel.sh@20 -- # val=32 00:06:51.275 15:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:51.275 15:49:30 -- 
accel/accel.sh@20 -- # val=1 00:06:51.275 15:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:51.275 15:49:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.275 15:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:51.275 15:49:30 -- accel/accel.sh@20 -- # val=Yes 00:06:51.275 15:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:51.275 15:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:51.276 15:49:30 -- accel/accel.sh@20 -- # val= 00:06:51.276 15:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.276 15:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:51.276 15:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:51.276 15:49:30 -- accel/accel.sh@20 -- # val= 00:06:51.276 15:49:30 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.276 15:49:30 -- accel/accel.sh@19 -- # IFS=: 00:06:51.276 15:49:30 -- accel/accel.sh@19 -- # read -r var val 00:06:53.180 15:49:32 -- accel/accel.sh@20 -- # val= 00:06:53.180 15:49:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.180 15:49:32 -- accel/accel.sh@19 -- # IFS=: 00:06:53.180 15:49:32 -- accel/accel.sh@19 -- # read -r var val 00:06:53.180 15:49:32 -- accel/accel.sh@20 -- # val= 00:06:53.180 15:49:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.180 15:49:32 -- accel/accel.sh@19 -- # IFS=: 00:06:53.180 15:49:32 -- accel/accel.sh@19 -- # read -r var val 00:06:53.180 15:49:32 -- accel/accel.sh@20 -- # val= 00:06:53.180 15:49:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.180 15:49:32 -- accel/accel.sh@19 -- # IFS=: 00:06:53.180 15:49:32 -- accel/accel.sh@19 -- # read -r var val 00:06:53.180 15:49:32 -- accel/accel.sh@20 -- # val= 00:06:53.180 15:49:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.180 15:49:32 -- accel/accel.sh@19 -- # IFS=: 00:06:53.180 15:49:32 -- accel/accel.sh@19 -- # read -r var val 00:06:53.180 15:49:32 -- accel/accel.sh@20 -- # val= 00:06:53.180 15:49:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.180 15:49:32 -- accel/accel.sh@19 -- # IFS=: 00:06:53.180 15:49:32 -- accel/accel.sh@19 -- # read -r var val 00:06:53.180 15:49:32 -- accel/accel.sh@20 -- # val= 00:06:53.180 15:49:32 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.180 15:49:32 -- accel/accel.sh@19 -- # IFS=: 00:06:53.180 15:49:32 -- accel/accel.sh@19 -- # read -r var val 00:06:53.180 15:49:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.180 15:49:32 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:53.180 15:49:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.180 00:06:53.180 real 0m2.583s 00:06:53.180 user 0m2.378s 00:06:53.180 sys 0m0.206s 00:06:53.180 15:49:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:53.180 15:49:32 -- common/autotest_common.sh@10 -- # set +x 00:06:53.180 ************************************ 00:06:53.180 END TEST accel_crc32c 00:06:53.180 ************************************ 00:06:53.439 15:49:32 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:53.439 15:49:32 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:53.439 15:49:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.439 15:49:32 -- common/autotest_common.sh@10 -- # set +x 00:06:53.439 ************************************ 00:06:53.439 START TEST 
accel_crc32c_C2 00:06:53.439 ************************************ 00:06:53.439 15:49:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:53.439 15:49:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.439 15:49:33 -- accel/accel.sh@17 -- # local accel_module 00:06:53.439 15:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:53.439 15:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:53.439 15:49:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:53.439 15:49:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:53.439 15:49:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.439 15:49:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.439 15:49:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.439 15:49:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.439 15:49:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.439 15:49:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.439 15:49:33 -- accel/accel.sh@40 -- # local IFS=, 00:06:53.439 15:49:33 -- accel/accel.sh@41 -- # jq -r . 00:06:53.439 [2024-04-26 15:49:33.068584] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:53.439 [2024-04-26 15:49:33.068667] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2285543 ] 00:06:53.697 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.697 [2024-04-26 15:49:33.176757] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.955 [2024-04-26 15:49:33.406550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.215 15:49:33 -- accel/accel.sh@20 -- # val= 00:06:54.215 15:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:54.215 15:49:33 -- accel/accel.sh@20 -- # val= 00:06:54.215 15:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:54.215 15:49:33 -- accel/accel.sh@20 -- # val=0x1 00:06:54.215 15:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:54.215 15:49:33 -- accel/accel.sh@20 -- # val= 00:06:54.215 15:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:54.215 15:49:33 -- accel/accel.sh@20 -- # val= 00:06:54.215 15:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:54.215 15:49:33 -- accel/accel.sh@20 -- # val=crc32c 00:06:54.215 15:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.215 15:49:33 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:54.215 15:49:33 -- accel/accel.sh@20 -- # val=0 00:06:54.215 15:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:54.215 15:49:33 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.215 15:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:54.215 15:49:33 -- accel/accel.sh@20 -- # val= 00:06:54.215 15:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:54.215 15:49:33 -- accel/accel.sh@20 -- # val=software 00:06:54.215 15:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.215 15:49:33 -- accel/accel.sh@22 -- # accel_module=software 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:54.215 15:49:33 -- accel/accel.sh@20 -- # val=32 00:06:54.215 15:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:54.215 15:49:33 -- accel/accel.sh@20 -- # val=32 00:06:54.215 15:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:54.215 15:49:33 -- accel/accel.sh@20 -- # val=1 00:06:54.215 15:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:54.215 15:49:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.215 15:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:54.215 15:49:33 -- accel/accel.sh@20 -- # val=Yes 00:06:54.215 15:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:54.215 15:49:33 -- accel/accel.sh@20 -- # val= 00:06:54.215 15:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:54.215 15:49:33 -- accel/accel.sh@20 -- # val= 00:06:54.215 15:49:33 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # IFS=: 00:06:54.215 15:49:33 -- accel/accel.sh@19 -- # read -r var val 00:06:56.119 15:49:35 -- accel/accel.sh@20 -- # val= 00:06:56.119 15:49:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.119 15:49:35 -- accel/accel.sh@19 -- # IFS=: 00:06:56.119 15:49:35 -- accel/accel.sh@19 -- # read -r var val 00:06:56.119 15:49:35 -- accel/accel.sh@20 -- # val= 00:06:56.119 15:49:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.119 15:49:35 -- accel/accel.sh@19 -- # IFS=: 00:06:56.119 15:49:35 -- accel/accel.sh@19 -- # read -r var val 00:06:56.119 15:49:35 -- accel/accel.sh@20 -- # val= 00:06:56.119 15:49:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.119 15:49:35 -- accel/accel.sh@19 -- # IFS=: 00:06:56.119 15:49:35 -- accel/accel.sh@19 -- # read -r var val 00:06:56.119 15:49:35 -- accel/accel.sh@20 -- # val= 00:06:56.119 15:49:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.119 15:49:35 -- accel/accel.sh@19 -- # IFS=: 00:06:56.119 15:49:35 -- accel/accel.sh@19 -- # read -r var val 00:06:56.119 15:49:35 -- accel/accel.sh@20 -- # val= 00:06:56.119 15:49:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.119 15:49:35 -- accel/accel.sh@19 -- # IFS=: 00:06:56.119 15:49:35 -- 
accel/accel.sh@19 -- # read -r var val 00:06:56.119 15:49:35 -- accel/accel.sh@20 -- # val= 00:06:56.119 15:49:35 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.119 15:49:35 -- accel/accel.sh@19 -- # IFS=: 00:06:56.119 15:49:35 -- accel/accel.sh@19 -- # read -r var val 00:06:56.119 15:49:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.119 15:49:35 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:56.119 15:49:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.119 00:06:56.119 real 0m2.553s 00:06:56.119 user 0m2.366s 00:06:56.119 sys 0m0.188s 00:06:56.119 15:49:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:56.119 15:49:35 -- common/autotest_common.sh@10 -- # set +x 00:06:56.119 ************************************ 00:06:56.119 END TEST accel_crc32c_C2 00:06:56.119 ************************************ 00:06:56.119 15:49:35 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:56.119 15:49:35 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:56.119 15:49:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.119 15:49:35 -- common/autotest_common.sh@10 -- # set +x 00:06:56.119 ************************************ 00:06:56.119 START TEST accel_copy 00:06:56.119 ************************************ 00:06:56.119 15:49:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:06:56.119 15:49:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.119 15:49:35 -- accel/accel.sh@17 -- # local accel_module 00:06:56.119 15:49:35 -- accel/accel.sh@19 -- # IFS=: 00:06:56.119 15:49:35 -- accel/accel.sh@19 -- # read -r var val 00:06:56.119 15:49:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:56.119 15:49:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:56.119 15:49:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.119 15:49:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.119 15:49:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.119 15:49:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.119 15:49:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.119 15:49:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.119 15:49:35 -- accel/accel.sh@40 -- # local IFS=, 00:06:56.119 15:49:35 -- accel/accel.sh@41 -- # jq -r . 00:06:56.119 [2024-04-26 15:49:35.778113] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:06:56.119 [2024-04-26 15:49:35.778185] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286032 ] 00:06:56.379 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.379 [2024-04-26 15:49:35.877277] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.638 [2024-04-26 15:49:36.093114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.897 15:49:36 -- accel/accel.sh@20 -- # val= 00:06:56.897 15:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:56.897 15:49:36 -- accel/accel.sh@20 -- # val= 00:06:56.897 15:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:56.897 15:49:36 -- accel/accel.sh@20 -- # val=0x1 00:06:56.897 15:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:56.897 15:49:36 -- accel/accel.sh@20 -- # val= 00:06:56.897 15:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:56.897 15:49:36 -- accel/accel.sh@20 -- # val= 00:06:56.897 15:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:56.897 15:49:36 -- accel/accel.sh@20 -- # val=copy 00:06:56.897 15:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 15:49:36 -- accel/accel.sh@23 -- # accel_opc=copy 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:56.897 15:49:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.897 15:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:56.897 15:49:36 -- accel/accel.sh@20 -- # val= 00:06:56.897 15:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:56.897 15:49:36 -- accel/accel.sh@20 -- # val=software 00:06:56.897 15:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 15:49:36 -- accel/accel.sh@22 -- # accel_module=software 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:56.897 15:49:36 -- accel/accel.sh@20 -- # val=32 00:06:56.897 15:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:56.897 15:49:36 -- accel/accel.sh@20 -- # val=32 00:06:56.897 15:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:56.897 15:49:36 -- accel/accel.sh@20 -- # val=1 00:06:56.897 15:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:56.897 15:49:36 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:56.897 15:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:56.897 15:49:36 -- accel/accel.sh@20 -- # val=Yes 00:06:56.897 15:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:56.897 15:49:36 -- accel/accel.sh@20 -- # val= 00:06:56.897 15:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:56.897 15:49:36 -- accel/accel.sh@20 -- # val= 00:06:56.897 15:49:36 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # IFS=: 00:06:56.897 15:49:36 -- accel/accel.sh@19 -- # read -r var val 00:06:58.803 15:49:38 -- accel/accel.sh@20 -- # val= 00:06:58.803 15:49:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.803 15:49:38 -- accel/accel.sh@19 -- # IFS=: 00:06:58.803 15:49:38 -- accel/accel.sh@19 -- # read -r var val 00:06:58.803 15:49:38 -- accel/accel.sh@20 -- # val= 00:06:58.803 15:49:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.803 15:49:38 -- accel/accel.sh@19 -- # IFS=: 00:06:58.803 15:49:38 -- accel/accel.sh@19 -- # read -r var val 00:06:58.803 15:49:38 -- accel/accel.sh@20 -- # val= 00:06:58.803 15:49:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.803 15:49:38 -- accel/accel.sh@19 -- # IFS=: 00:06:58.803 15:49:38 -- accel/accel.sh@19 -- # read -r var val 00:06:58.803 15:49:38 -- accel/accel.sh@20 -- # val= 00:06:58.803 15:49:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.803 15:49:38 -- accel/accel.sh@19 -- # IFS=: 00:06:58.803 15:49:38 -- accel/accel.sh@19 -- # read -r var val 00:06:58.803 15:49:38 -- accel/accel.sh@20 -- # val= 00:06:58.803 15:49:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.803 15:49:38 -- accel/accel.sh@19 -- # IFS=: 00:06:58.803 15:49:38 -- accel/accel.sh@19 -- # read -r var val 00:06:58.803 15:49:38 -- accel/accel.sh@20 -- # val= 00:06:58.803 15:49:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.803 15:49:38 -- accel/accel.sh@19 -- # IFS=: 00:06:58.803 15:49:38 -- accel/accel.sh@19 -- # read -r var val 00:06:58.803 15:49:38 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.803 15:49:38 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:58.803 15:49:38 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.803 00:06:58.803 real 0m2.524s 00:06:58.803 user 0m2.349s 00:06:58.803 sys 0m0.175s 00:06:58.803 15:49:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:58.803 15:49:38 -- common/autotest_common.sh@10 -- # set +x 00:06:58.803 ************************************ 00:06:58.803 END TEST accel_copy 00:06:58.803 ************************************ 00:06:58.803 15:49:38 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:58.803 15:49:38 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:58.803 15:49:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.803 15:49:38 -- common/autotest_common.sh@10 -- # set +x 00:06:58.803 ************************************ 00:06:58.803 START TEST accel_fill 00:06:58.803 ************************************ 00:06:58.803 15:49:38 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:58.803 15:49:38 -- accel/accel.sh@16 -- # local accel_opc 
00:06:58.803 15:49:38 -- accel/accel.sh@17 -- # local accel_module 00:06:58.803 15:49:38 -- accel/accel.sh@19 -- # IFS=: 00:06:58.803 15:49:38 -- accel/accel.sh@19 -- # read -r var val 00:06:58.803 15:49:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:58.803 15:49:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:58.803 15:49:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.803 15:49:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.803 15:49:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.803 15:49:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.803 15:49:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.803 15:49:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.803 15:49:38 -- accel/accel.sh@40 -- # local IFS=, 00:06:58.803 15:49:38 -- accel/accel.sh@41 -- # jq -r . 00:06:58.803 [2024-04-26 15:49:38.453600] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:06:58.803 [2024-04-26 15:49:38.453673] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286514 ] 00:06:59.062 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.062 [2024-04-26 15:49:38.554828] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.321 [2024-04-26 15:49:38.765704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.580 15:49:39 -- accel/accel.sh@20 -- # val= 00:06:59.580 15:49:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # IFS=: 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # read -r var val 00:06:59.580 15:49:39 -- accel/accel.sh@20 -- # val= 00:06:59.580 15:49:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # IFS=: 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # read -r var val 00:06:59.580 15:49:39 -- accel/accel.sh@20 -- # val=0x1 00:06:59.580 15:49:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # IFS=: 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # read -r var val 00:06:59.580 15:49:39 -- accel/accel.sh@20 -- # val= 00:06:59.580 15:49:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # IFS=: 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # read -r var val 00:06:59.580 15:49:39 -- accel/accel.sh@20 -- # val= 00:06:59.580 15:49:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # IFS=: 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # read -r var val 00:06:59.580 15:49:39 -- accel/accel.sh@20 -- # val=fill 00:06:59.580 15:49:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.580 15:49:39 -- accel/accel.sh@23 -- # accel_opc=fill 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # IFS=: 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # read -r var val 00:06:59.580 15:49:39 -- accel/accel.sh@20 -- # val=0x80 00:06:59.580 15:49:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # IFS=: 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # read -r var val 00:06:59.580 15:49:39 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.580 15:49:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # IFS=: 00:06:59.580 15:49:39 -- accel/accel.sh@19 
-- # read -r var val 00:06:59.580 15:49:39 -- accel/accel.sh@20 -- # val= 00:06:59.580 15:49:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # IFS=: 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # read -r var val 00:06:59.580 15:49:39 -- accel/accel.sh@20 -- # val=software 00:06:59.580 15:49:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.580 15:49:39 -- accel/accel.sh@22 -- # accel_module=software 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # IFS=: 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # read -r var val 00:06:59.580 15:49:39 -- accel/accel.sh@20 -- # val=64 00:06:59.580 15:49:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # IFS=: 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # read -r var val 00:06:59.580 15:49:39 -- accel/accel.sh@20 -- # val=64 00:06:59.580 15:49:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # IFS=: 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # read -r var val 00:06:59.580 15:49:39 -- accel/accel.sh@20 -- # val=1 00:06:59.580 15:49:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # IFS=: 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # read -r var val 00:06:59.580 15:49:39 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.580 15:49:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # IFS=: 00:06:59.580 15:49:39 -- accel/accel.sh@19 -- # read -r var val 00:06:59.580 15:49:39 -- accel/accel.sh@20 -- # val=Yes 00:06:59.581 15:49:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.581 15:49:39 -- accel/accel.sh@19 -- # IFS=: 00:06:59.581 15:49:39 -- accel/accel.sh@19 -- # read -r var val 00:06:59.581 15:49:39 -- accel/accel.sh@20 -- # val= 00:06:59.581 15:49:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.581 15:49:39 -- accel/accel.sh@19 -- # IFS=: 00:06:59.581 15:49:39 -- accel/accel.sh@19 -- # read -r var val 00:06:59.581 15:49:39 -- accel/accel.sh@20 -- # val= 00:06:59.581 15:49:39 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.581 15:49:39 -- accel/accel.sh@19 -- # IFS=: 00:06:59.581 15:49:39 -- accel/accel.sh@19 -- # read -r var val 00:07:01.485 15:49:40 -- accel/accel.sh@20 -- # val= 00:07:01.485 15:49:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.485 15:49:40 -- accel/accel.sh@19 -- # IFS=: 00:07:01.485 15:49:40 -- accel/accel.sh@19 -- # read -r var val 00:07:01.485 15:49:40 -- accel/accel.sh@20 -- # val= 00:07:01.485 15:49:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.485 15:49:40 -- accel/accel.sh@19 -- # IFS=: 00:07:01.485 15:49:40 -- accel/accel.sh@19 -- # read -r var val 00:07:01.485 15:49:40 -- accel/accel.sh@20 -- # val= 00:07:01.485 15:49:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.485 15:49:40 -- accel/accel.sh@19 -- # IFS=: 00:07:01.485 15:49:40 -- accel/accel.sh@19 -- # read -r var val 00:07:01.485 15:49:40 -- accel/accel.sh@20 -- # val= 00:07:01.485 15:49:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.485 15:49:40 -- accel/accel.sh@19 -- # IFS=: 00:07:01.485 15:49:40 -- accel/accel.sh@19 -- # read -r var val 00:07:01.485 15:49:40 -- accel/accel.sh@20 -- # val= 00:07:01.485 15:49:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.485 15:49:40 -- accel/accel.sh@19 -- # IFS=: 00:07:01.485 15:49:40 -- accel/accel.sh@19 -- # read -r var val 00:07:01.485 15:49:40 -- accel/accel.sh@20 -- # val= 00:07:01.485 15:49:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.485 15:49:40 -- accel/accel.sh@19 
-- # IFS=: 00:07:01.485 15:49:40 -- accel/accel.sh@19 -- # read -r var val 00:07:01.485 15:49:40 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.485 15:49:40 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:01.485 15:49:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.485 00:07:01.485 real 0m2.548s 00:07:01.485 user 0m2.363s 00:07:01.485 sys 0m0.185s 00:07:01.485 15:49:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:01.485 15:49:40 -- common/autotest_common.sh@10 -- # set +x 00:07:01.485 ************************************ 00:07:01.485 END TEST accel_fill 00:07:01.485 ************************************ 00:07:01.485 15:49:40 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:01.485 15:49:40 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:01.485 15:49:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.485 15:49:40 -- common/autotest_common.sh@10 -- # set +x 00:07:01.485 ************************************ 00:07:01.485 START TEST accel_copy_crc32c 00:07:01.485 ************************************ 00:07:01.485 15:49:41 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:07:01.485 15:49:41 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.485 15:49:41 -- accel/accel.sh@17 -- # local accel_module 00:07:01.485 15:49:41 -- accel/accel.sh@19 -- # IFS=: 00:07:01.485 15:49:41 -- accel/accel.sh@19 -- # read -r var val 00:07:01.485 15:49:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:01.485 15:49:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:01.485 15:49:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.485 15:49:41 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.485 15:49:41 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.485 15:49:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.485 15:49:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.485 15:49:41 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.485 15:49:41 -- accel/accel.sh@40 -- # local IFS=, 00:07:01.485 15:49:41 -- accel/accel.sh@41 -- # jq -r . 00:07:01.485 [2024-04-26 15:49:41.158411] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:07:01.485 [2024-04-26 15:49:41.158498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2287000 ] 00:07:01.743 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.743 [2024-04-26 15:49:41.259272] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.002 [2024-04-26 15:49:41.473504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.261 15:49:41 -- accel/accel.sh@20 -- # val= 00:07:02.261 15:49:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # IFS=: 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # read -r var val 00:07:02.261 15:49:41 -- accel/accel.sh@20 -- # val= 00:07:02.261 15:49:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # IFS=: 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # read -r var val 00:07:02.261 15:49:41 -- accel/accel.sh@20 -- # val=0x1 00:07:02.261 15:49:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # IFS=: 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # read -r var val 00:07:02.261 15:49:41 -- accel/accel.sh@20 -- # val= 00:07:02.261 15:49:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # IFS=: 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # read -r var val 00:07:02.261 15:49:41 -- accel/accel.sh@20 -- # val= 00:07:02.261 15:49:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # IFS=: 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # read -r var val 00:07:02.261 15:49:41 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:02.261 15:49:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.261 15:49:41 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # IFS=: 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # read -r var val 00:07:02.261 15:49:41 -- accel/accel.sh@20 -- # val=0 00:07:02.261 15:49:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # IFS=: 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # read -r var val 00:07:02.261 15:49:41 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.261 15:49:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # IFS=: 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # read -r var val 00:07:02.261 15:49:41 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.261 15:49:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # IFS=: 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # read -r var val 00:07:02.261 15:49:41 -- accel/accel.sh@20 -- # val= 00:07:02.261 15:49:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # IFS=: 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # read -r var val 00:07:02.261 15:49:41 -- accel/accel.sh@20 -- # val=software 00:07:02.261 15:49:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.261 15:49:41 -- accel/accel.sh@22 -- # accel_module=software 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # IFS=: 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # read -r var val 00:07:02.261 15:49:41 -- accel/accel.sh@20 -- # val=32 00:07:02.261 15:49:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # IFS=: 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # read -r var val 
00:07:02.261 15:49:41 -- accel/accel.sh@20 -- # val=32 00:07:02.261 15:49:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # IFS=: 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # read -r var val 00:07:02.261 15:49:41 -- accel/accel.sh@20 -- # val=1 00:07:02.261 15:49:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # IFS=: 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # read -r var val 00:07:02.261 15:49:41 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.261 15:49:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # IFS=: 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # read -r var val 00:07:02.261 15:49:41 -- accel/accel.sh@20 -- # val=Yes 00:07:02.261 15:49:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # IFS=: 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # read -r var val 00:07:02.261 15:49:41 -- accel/accel.sh@20 -- # val= 00:07:02.261 15:49:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # IFS=: 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # read -r var val 00:07:02.261 15:49:41 -- accel/accel.sh@20 -- # val= 00:07:02.261 15:49:41 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # IFS=: 00:07:02.261 15:49:41 -- accel/accel.sh@19 -- # read -r var val 00:07:04.161 15:49:43 -- accel/accel.sh@20 -- # val= 00:07:04.161 15:49:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.161 15:49:43 -- accel/accel.sh@19 -- # IFS=: 00:07:04.161 15:49:43 -- accel/accel.sh@19 -- # read -r var val 00:07:04.161 15:49:43 -- accel/accel.sh@20 -- # val= 00:07:04.161 15:49:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.161 15:49:43 -- accel/accel.sh@19 -- # IFS=: 00:07:04.161 15:49:43 -- accel/accel.sh@19 -- # read -r var val 00:07:04.161 15:49:43 -- accel/accel.sh@20 -- # val= 00:07:04.161 15:49:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.161 15:49:43 -- accel/accel.sh@19 -- # IFS=: 00:07:04.161 15:49:43 -- accel/accel.sh@19 -- # read -r var val 00:07:04.161 15:49:43 -- accel/accel.sh@20 -- # val= 00:07:04.161 15:49:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.161 15:49:43 -- accel/accel.sh@19 -- # IFS=: 00:07:04.161 15:49:43 -- accel/accel.sh@19 -- # read -r var val 00:07:04.161 15:49:43 -- accel/accel.sh@20 -- # val= 00:07:04.161 15:49:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.161 15:49:43 -- accel/accel.sh@19 -- # IFS=: 00:07:04.161 15:49:43 -- accel/accel.sh@19 -- # read -r var val 00:07:04.161 15:49:43 -- accel/accel.sh@20 -- # val= 00:07:04.161 15:49:43 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.161 15:49:43 -- accel/accel.sh@19 -- # IFS=: 00:07:04.161 15:49:43 -- accel/accel.sh@19 -- # read -r var val 00:07:04.161 15:49:43 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.161 15:49:43 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:04.161 15:49:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.161 00:07:04.161 real 0m2.528s 00:07:04.161 user 0m2.350s 00:07:04.161 sys 0m0.179s 00:07:04.161 15:49:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:04.161 15:49:43 -- common/autotest_common.sh@10 -- # set +x 00:07:04.161 ************************************ 00:07:04.161 END TEST accel_copy_crc32c 00:07:04.161 ************************************ 00:07:04.161 15:49:43 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:04.161 
15:49:43 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:04.161 15:49:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.161 15:49:43 -- common/autotest_common.sh@10 -- # set +x 00:07:04.161 ************************************ 00:07:04.161 START TEST accel_copy_crc32c_C2 00:07:04.161 ************************************ 00:07:04.161 15:49:43 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:04.161 15:49:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.161 15:49:43 -- accel/accel.sh@17 -- # local accel_module 00:07:04.161 15:49:43 -- accel/accel.sh@19 -- # IFS=: 00:07:04.161 15:49:43 -- accel/accel.sh@19 -- # read -r var val 00:07:04.161 15:49:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:04.161 15:49:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:04.161 15:49:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.161 15:49:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.161 15:49:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.161 15:49:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.161 15:49:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.161 15:49:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.161 15:49:43 -- accel/accel.sh@40 -- # local IFS=, 00:07:04.161 15:49:43 -- accel/accel.sh@41 -- # jq -r . 00:07:04.161 [2024-04-26 15:49:43.839000] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:04.161 [2024-04-26 15:49:43.839147] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2287483 ] 00:07:04.419 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.419 [2024-04-26 15:49:43.940063] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.677 [2024-04-26 15:49:44.153267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.935 15:49:44 -- accel/accel.sh@20 -- # val= 00:07:04.935 15:49:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # IFS=: 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # read -r var val 00:07:04.935 15:49:44 -- accel/accel.sh@20 -- # val= 00:07:04.935 15:49:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # IFS=: 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # read -r var val 00:07:04.935 15:49:44 -- accel/accel.sh@20 -- # val=0x1 00:07:04.935 15:49:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # IFS=: 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # read -r var val 00:07:04.935 15:49:44 -- accel/accel.sh@20 -- # val= 00:07:04.935 15:49:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # IFS=: 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # read -r var val 00:07:04.935 15:49:44 -- accel/accel.sh@20 -- # val= 00:07:04.935 15:49:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # IFS=: 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # read -r var val 00:07:04.935 15:49:44 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:04.935 15:49:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.935 15:49:44 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # IFS=: 00:07:04.935 
15:49:44 -- accel/accel.sh@19 -- # read -r var val 00:07:04.935 15:49:44 -- accel/accel.sh@20 -- # val=0 00:07:04.935 15:49:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # IFS=: 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # read -r var val 00:07:04.935 15:49:44 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.935 15:49:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # IFS=: 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # read -r var val 00:07:04.935 15:49:44 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:04.935 15:49:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # IFS=: 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # read -r var val 00:07:04.935 15:49:44 -- accel/accel.sh@20 -- # val= 00:07:04.935 15:49:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # IFS=: 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # read -r var val 00:07:04.935 15:49:44 -- accel/accel.sh@20 -- # val=software 00:07:04.935 15:49:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.935 15:49:44 -- accel/accel.sh@22 -- # accel_module=software 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # IFS=: 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # read -r var val 00:07:04.935 15:49:44 -- accel/accel.sh@20 -- # val=32 00:07:04.935 15:49:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # IFS=: 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # read -r var val 00:07:04.935 15:49:44 -- accel/accel.sh@20 -- # val=32 00:07:04.935 15:49:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # IFS=: 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # read -r var val 00:07:04.935 15:49:44 -- accel/accel.sh@20 -- # val=1 00:07:04.935 15:49:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # IFS=: 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # read -r var val 00:07:04.935 15:49:44 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.935 15:49:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # IFS=: 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # read -r var val 00:07:04.935 15:49:44 -- accel/accel.sh@20 -- # val=Yes 00:07:04.935 15:49:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # IFS=: 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # read -r var val 00:07:04.935 15:49:44 -- accel/accel.sh@20 -- # val= 00:07:04.935 15:49:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # IFS=: 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # read -r var val 00:07:04.935 15:49:44 -- accel/accel.sh@20 -- # val= 00:07:04.935 15:49:44 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # IFS=: 00:07:04.935 15:49:44 -- accel/accel.sh@19 -- # read -r var val 00:07:06.837 15:49:46 -- accel/accel.sh@20 -- # val= 00:07:06.837 15:49:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.837 15:49:46 -- accel/accel.sh@19 -- # IFS=: 00:07:06.837 15:49:46 -- accel/accel.sh@19 -- # read -r var val 00:07:06.837 15:49:46 -- accel/accel.sh@20 -- # val= 00:07:06.837 15:49:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.837 15:49:46 -- accel/accel.sh@19 -- # IFS=: 00:07:06.837 15:49:46 -- accel/accel.sh@19 -- # read -r var val 00:07:06.837 15:49:46 -- accel/accel.sh@20 -- # val= 00:07:06.837 15:49:46 -- accel/accel.sh@21 -- # case 
"$var" in 00:07:06.837 15:49:46 -- accel/accel.sh@19 -- # IFS=: 00:07:06.837 15:49:46 -- accel/accel.sh@19 -- # read -r var val 00:07:06.837 15:49:46 -- accel/accel.sh@20 -- # val= 00:07:06.837 15:49:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.837 15:49:46 -- accel/accel.sh@19 -- # IFS=: 00:07:06.837 15:49:46 -- accel/accel.sh@19 -- # read -r var val 00:07:06.837 15:49:46 -- accel/accel.sh@20 -- # val= 00:07:06.837 15:49:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.837 15:49:46 -- accel/accel.sh@19 -- # IFS=: 00:07:06.837 15:49:46 -- accel/accel.sh@19 -- # read -r var val 00:07:06.837 15:49:46 -- accel/accel.sh@20 -- # val= 00:07:06.837 15:49:46 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.837 15:49:46 -- accel/accel.sh@19 -- # IFS=: 00:07:06.837 15:49:46 -- accel/accel.sh@19 -- # read -r var val 00:07:06.837 15:49:46 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.837 15:49:46 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:06.837 15:49:46 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.837 00:07:06.837 real 0m2.535s 00:07:06.837 user 0m2.361s 00:07:06.837 sys 0m0.175s 00:07:06.837 15:49:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:06.837 15:49:46 -- common/autotest_common.sh@10 -- # set +x 00:07:06.837 ************************************ 00:07:06.837 END TEST accel_copy_crc32c_C2 00:07:06.837 ************************************ 00:07:06.837 15:49:46 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:06.837 15:49:46 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:06.837 15:49:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.837 15:49:46 -- common/autotest_common.sh@10 -- # set +x 00:07:06.837 ************************************ 00:07:06.837 START TEST accel_dualcast 00:07:06.837 ************************************ 00:07:06.837 15:49:46 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:07:06.837 15:49:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.837 15:49:46 -- accel/accel.sh@17 -- # local accel_module 00:07:06.837 15:49:46 -- accel/accel.sh@19 -- # IFS=: 00:07:06.837 15:49:46 -- accel/accel.sh@19 -- # read -r var val 00:07:06.837 15:49:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:06.837 15:49:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:06.837 15:49:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.837 15:49:46 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.837 15:49:46 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.837 15:49:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.837 15:49:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.837 15:49:46 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.837 15:49:46 -- accel/accel.sh@40 -- # local IFS=, 00:07:06.837 15:49:46 -- accel/accel.sh@41 -- # jq -r . 00:07:07.097 [2024-04-26 15:49:46.531162] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:07:07.097 [2024-04-26 15:49:46.531239] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2287969 ] 00:07:07.097 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.097 [2024-04-26 15:49:46.632194] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.405 [2024-04-26 15:49:46.844975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.695 15:49:47 -- accel/accel.sh@20 -- # val= 00:07:07.695 15:49:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # IFS=: 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # read -r var val 00:07:07.695 15:49:47 -- accel/accel.sh@20 -- # val= 00:07:07.695 15:49:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # IFS=: 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # read -r var val 00:07:07.695 15:49:47 -- accel/accel.sh@20 -- # val=0x1 00:07:07.695 15:49:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # IFS=: 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # read -r var val 00:07:07.695 15:49:47 -- accel/accel.sh@20 -- # val= 00:07:07.695 15:49:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # IFS=: 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # read -r var val 00:07:07.695 15:49:47 -- accel/accel.sh@20 -- # val= 00:07:07.695 15:49:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # IFS=: 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # read -r var val 00:07:07.695 15:49:47 -- accel/accel.sh@20 -- # val=dualcast 00:07:07.695 15:49:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.695 15:49:47 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # IFS=: 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # read -r var val 00:07:07.695 15:49:47 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.695 15:49:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # IFS=: 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # read -r var val 00:07:07.695 15:49:47 -- accel/accel.sh@20 -- # val= 00:07:07.695 15:49:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # IFS=: 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # read -r var val 00:07:07.695 15:49:47 -- accel/accel.sh@20 -- # val=software 00:07:07.695 15:49:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.695 15:49:47 -- accel/accel.sh@22 -- # accel_module=software 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # IFS=: 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # read -r var val 00:07:07.695 15:49:47 -- accel/accel.sh@20 -- # val=32 00:07:07.695 15:49:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # IFS=: 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # read -r var val 00:07:07.695 15:49:47 -- accel/accel.sh@20 -- # val=32 00:07:07.695 15:49:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # IFS=: 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # read -r var val 00:07:07.695 15:49:47 -- accel/accel.sh@20 -- # val=1 00:07:07.695 15:49:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # IFS=: 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # read -r var val 00:07:07.695 15:49:47 
-- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.695 15:49:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # IFS=: 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # read -r var val 00:07:07.695 15:49:47 -- accel/accel.sh@20 -- # val=Yes 00:07:07.695 15:49:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # IFS=: 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # read -r var val 00:07:07.695 15:49:47 -- accel/accel.sh@20 -- # val= 00:07:07.695 15:49:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # IFS=: 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # read -r var val 00:07:07.695 15:49:47 -- accel/accel.sh@20 -- # val= 00:07:07.695 15:49:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # IFS=: 00:07:07.695 15:49:47 -- accel/accel.sh@19 -- # read -r var val 00:07:09.600 15:49:49 -- accel/accel.sh@20 -- # val= 00:07:09.600 15:49:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.600 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:09.600 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:09.600 15:49:49 -- accel/accel.sh@20 -- # val= 00:07:09.600 15:49:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.600 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:09.600 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:09.600 15:49:49 -- accel/accel.sh@20 -- # val= 00:07:09.600 15:49:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.600 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:09.600 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:09.600 15:49:49 -- accel/accel.sh@20 -- # val= 00:07:09.600 15:49:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.600 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:09.600 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:09.600 15:49:49 -- accel/accel.sh@20 -- # val= 00:07:09.600 15:49:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.600 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:09.600 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:09.600 15:49:49 -- accel/accel.sh@20 -- # val= 00:07:09.600 15:49:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.600 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:09.600 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:09.600 15:49:49 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.600 15:49:49 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:09.600 15:49:49 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.600 00:07:09.600 real 0m2.535s 00:07:09.600 user 0m2.345s 00:07:09.600 sys 0m0.191s 00:07:09.600 15:49:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:09.600 15:49:49 -- common/autotest_common.sh@10 -- # set +x 00:07:09.600 ************************************ 00:07:09.600 END TEST accel_dualcast 00:07:09.600 ************************************ 00:07:09.600 15:49:49 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:09.600 15:49:49 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:09.600 15:49:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.600 15:49:49 -- common/autotest_common.sh@10 -- # set +x 00:07:09.600 ************************************ 00:07:09.601 START TEST accel_compare 00:07:09.601 ************************************ 00:07:09.601 15:49:49 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:07:09.601 15:49:49 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.601 15:49:49 
-- accel/accel.sh@17 -- # local accel_module 00:07:09.601 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:09.601 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:09.601 15:49:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:09.601 15:49:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:09.601 15:49:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.601 15:49:49 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.601 15:49:49 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.601 15:49:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.601 15:49:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.601 15:49:49 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.601 15:49:49 -- accel/accel.sh@40 -- # local IFS=, 00:07:09.601 15:49:49 -- accel/accel.sh@41 -- # jq -r . 00:07:09.601 [2024-04-26 15:49:49.219330] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:09.601 [2024-04-26 15:49:49.219407] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2288462 ] 00:07:09.601 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.859 [2024-04-26 15:49:49.322151] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.859 [2024-04-26 15:49:49.533060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.119 15:49:49 -- accel/accel.sh@20 -- # val= 00:07:10.119 15:49:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:10.119 15:49:49 -- accel/accel.sh@20 -- # val= 00:07:10.119 15:49:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:10.119 15:49:49 -- accel/accel.sh@20 -- # val=0x1 00:07:10.119 15:49:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:10.119 15:49:49 -- accel/accel.sh@20 -- # val= 00:07:10.119 15:49:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:10.119 15:49:49 -- accel/accel.sh@20 -- # val= 00:07:10.119 15:49:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:10.119 15:49:49 -- accel/accel.sh@20 -- # val=compare 00:07:10.119 15:49:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.119 15:49:49 -- accel/accel.sh@23 -- # accel_opc=compare 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:10.119 15:49:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.119 15:49:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:10.119 15:49:49 -- accel/accel.sh@20 -- # val= 00:07:10.119 15:49:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:10.119 15:49:49 -- 
accel/accel.sh@20 -- # val=software 00:07:10.119 15:49:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.119 15:49:49 -- accel/accel.sh@22 -- # accel_module=software 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:10.119 15:49:49 -- accel/accel.sh@20 -- # val=32 00:07:10.119 15:49:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:10.119 15:49:49 -- accel/accel.sh@20 -- # val=32 00:07:10.119 15:49:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:10.119 15:49:49 -- accel/accel.sh@20 -- # val=1 00:07:10.119 15:49:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:10.119 15:49:49 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.119 15:49:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:10.119 15:49:49 -- accel/accel.sh@20 -- # val=Yes 00:07:10.119 15:49:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:10.119 15:49:49 -- accel/accel.sh@20 -- # val= 00:07:10.119 15:49:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:10.119 15:49:49 -- accel/accel.sh@20 -- # val= 00:07:10.119 15:49:49 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # IFS=: 00:07:10.119 15:49:49 -- accel/accel.sh@19 -- # read -r var val 00:07:12.027 15:49:51 -- accel/accel.sh@20 -- # val= 00:07:12.027 15:49:51 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.027 15:49:51 -- accel/accel.sh@19 -- # IFS=: 00:07:12.027 15:49:51 -- accel/accel.sh@19 -- # read -r var val 00:07:12.027 15:49:51 -- accel/accel.sh@20 -- # val= 00:07:12.027 15:49:51 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.027 15:49:51 -- accel/accel.sh@19 -- # IFS=: 00:07:12.027 15:49:51 -- accel/accel.sh@19 -- # read -r var val 00:07:12.027 15:49:51 -- accel/accel.sh@20 -- # val= 00:07:12.027 15:49:51 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.027 15:49:51 -- accel/accel.sh@19 -- # IFS=: 00:07:12.027 15:49:51 -- accel/accel.sh@19 -- # read -r var val 00:07:12.027 15:49:51 -- accel/accel.sh@20 -- # val= 00:07:12.027 15:49:51 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.027 15:49:51 -- accel/accel.sh@19 -- # IFS=: 00:07:12.027 15:49:51 -- accel/accel.sh@19 -- # read -r var val 00:07:12.027 15:49:51 -- accel/accel.sh@20 -- # val= 00:07:12.027 15:49:51 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.027 15:49:51 -- accel/accel.sh@19 -- # IFS=: 00:07:12.027 15:49:51 -- accel/accel.sh@19 -- # read -r var val 00:07:12.027 15:49:51 -- accel/accel.sh@20 -- # val= 00:07:12.027 15:49:51 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.027 15:49:51 -- accel/accel.sh@19 -- # IFS=: 00:07:12.027 15:49:51 -- accel/accel.sh@19 -- # read -r var val 00:07:12.286 15:49:51 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.286 15:49:51 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:12.286 15:49:51 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:07:12.286 00:07:12.286 real 0m2.530s 00:07:12.286 user 0m2.343s 00:07:12.286 sys 0m0.188s 00:07:12.286 15:49:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:12.286 15:49:51 -- common/autotest_common.sh@10 -- # set +x 00:07:12.286 ************************************ 00:07:12.286 END TEST accel_compare 00:07:12.286 ************************************ 00:07:12.286 15:49:51 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:12.286 15:49:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:12.286 15:49:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.286 15:49:51 -- common/autotest_common.sh@10 -- # set +x 00:07:12.286 ************************************ 00:07:12.286 START TEST accel_xor 00:07:12.286 ************************************ 00:07:12.286 15:49:51 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:07:12.286 15:49:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.286 15:49:51 -- accel/accel.sh@17 -- # local accel_module 00:07:12.286 15:49:51 -- accel/accel.sh@19 -- # IFS=: 00:07:12.286 15:49:51 -- accel/accel.sh@19 -- # read -r var val 00:07:12.286 15:49:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:12.286 15:49:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:12.286 15:49:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.286 15:49:51 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.286 15:49:51 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.286 15:49:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.286 15:49:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.286 15:49:51 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.286 15:49:51 -- accel/accel.sh@40 -- # local IFS=, 00:07:12.286 15:49:51 -- accel/accel.sh@41 -- # jq -r . 00:07:12.286 [2024-04-26 15:49:51.897878] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
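The compare and xor cases differ from dualcast only in the -w argument, so a quick sweep over the software-path workloads seen in this log is possible. A hedged sketch (ACCEL_PERF is a placeholder for the binary path used above, not a variable defined by the suite):

  ACCEL_PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  for w in compare dualcast xor; do
    "$ACCEL_PERF" -t 1 -w "$w" -y   # 1-second run per workload, with verification
  done
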
00:07:12.286 [2024-04-26 15:49:51.897958] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2288940 ] 00:07:12.286 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.545 [2024-04-26 15:49:52.003638] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.545 [2024-04-26 15:49:52.223893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.805 15:49:52 -- accel/accel.sh@20 -- # val= 00:07:12.805 15:49:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # IFS=: 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # read -r var val 00:07:12.805 15:49:52 -- accel/accel.sh@20 -- # val= 00:07:12.805 15:49:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # IFS=: 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # read -r var val 00:07:12.805 15:49:52 -- accel/accel.sh@20 -- # val=0x1 00:07:12.805 15:49:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # IFS=: 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # read -r var val 00:07:12.805 15:49:52 -- accel/accel.sh@20 -- # val= 00:07:12.805 15:49:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # IFS=: 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # read -r var val 00:07:12.805 15:49:52 -- accel/accel.sh@20 -- # val= 00:07:12.805 15:49:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # IFS=: 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # read -r var val 00:07:12.805 15:49:52 -- accel/accel.sh@20 -- # val=xor 00:07:12.805 15:49:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.805 15:49:52 -- accel/accel.sh@23 -- # accel_opc=xor 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # IFS=: 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # read -r var val 00:07:12.805 15:49:52 -- accel/accel.sh@20 -- # val=2 00:07:12.805 15:49:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # IFS=: 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # read -r var val 00:07:12.805 15:49:52 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.805 15:49:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # IFS=: 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # read -r var val 00:07:12.805 15:49:52 -- accel/accel.sh@20 -- # val= 00:07:12.805 15:49:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # IFS=: 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # read -r var val 00:07:12.805 15:49:52 -- accel/accel.sh@20 -- # val=software 00:07:12.805 15:49:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.805 15:49:52 -- accel/accel.sh@22 -- # accel_module=software 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # IFS=: 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # read -r var val 00:07:12.805 15:49:52 -- accel/accel.sh@20 -- # val=32 00:07:12.805 15:49:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # IFS=: 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # read -r var val 00:07:12.805 15:49:52 -- accel/accel.sh@20 -- # val=32 00:07:12.805 15:49:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # IFS=: 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # read -r var val 00:07:12.805 15:49:52 -- 
accel/accel.sh@20 -- # val=1 00:07:12.805 15:49:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # IFS=: 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # read -r var val 00:07:12.805 15:49:52 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.805 15:49:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # IFS=: 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # read -r var val 00:07:12.805 15:49:52 -- accel/accel.sh@20 -- # val=Yes 00:07:12.805 15:49:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # IFS=: 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # read -r var val 00:07:12.805 15:49:52 -- accel/accel.sh@20 -- # val= 00:07:12.805 15:49:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # IFS=: 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # read -r var val 00:07:12.805 15:49:52 -- accel/accel.sh@20 -- # val= 00:07:12.805 15:49:52 -- accel/accel.sh@21 -- # case "$var" in 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # IFS=: 00:07:12.805 15:49:52 -- accel/accel.sh@19 -- # read -r var val 00:07:14.725 15:49:54 -- accel/accel.sh@20 -- # val= 00:07:14.725 15:49:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.725 15:49:54 -- accel/accel.sh@19 -- # IFS=: 00:07:14.725 15:49:54 -- accel/accel.sh@19 -- # read -r var val 00:07:14.725 15:49:54 -- accel/accel.sh@20 -- # val= 00:07:14.725 15:49:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.725 15:49:54 -- accel/accel.sh@19 -- # IFS=: 00:07:14.725 15:49:54 -- accel/accel.sh@19 -- # read -r var val 00:07:14.725 15:49:54 -- accel/accel.sh@20 -- # val= 00:07:14.725 15:49:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.726 15:49:54 -- accel/accel.sh@19 -- # IFS=: 00:07:14.726 15:49:54 -- accel/accel.sh@19 -- # read -r var val 00:07:14.726 15:49:54 -- accel/accel.sh@20 -- # val= 00:07:14.726 15:49:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.726 15:49:54 -- accel/accel.sh@19 -- # IFS=: 00:07:14.726 15:49:54 -- accel/accel.sh@19 -- # read -r var val 00:07:14.726 15:49:54 -- accel/accel.sh@20 -- # val= 00:07:14.726 15:49:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.726 15:49:54 -- accel/accel.sh@19 -- # IFS=: 00:07:14.726 15:49:54 -- accel/accel.sh@19 -- # read -r var val 00:07:14.726 15:49:54 -- accel/accel.sh@20 -- # val= 00:07:14.726 15:49:54 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.726 15:49:54 -- accel/accel.sh@19 -- # IFS=: 00:07:14.726 15:49:54 -- accel/accel.sh@19 -- # read -r var val 00:07:14.985 15:49:54 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.985 15:49:54 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:14.985 15:49:54 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.985 00:07:14.985 real 0m2.561s 00:07:14.985 user 0m2.371s 00:07:14.985 sys 0m0.191s 00:07:14.985 15:49:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:14.985 15:49:54 -- common/autotest_common.sh@10 -- # set +x 00:07:14.985 ************************************ 00:07:14.985 END TEST accel_xor 00:07:14.985 ************************************ 00:07:14.985 15:49:54 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:14.985 15:49:54 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:14.985 15:49:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.985 15:49:54 -- common/autotest_common.sh@10 -- # set +x 00:07:14.985 ************************************ 00:07:14.985 START TEST accel_xor 
00:07:14.985 ************************************ 00:07:14.985 15:49:54 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:07:14.986 15:49:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.986 15:49:54 -- accel/accel.sh@17 -- # local accel_module 00:07:14.986 15:49:54 -- accel/accel.sh@19 -- # IFS=: 00:07:14.986 15:49:54 -- accel/accel.sh@19 -- # read -r var val 00:07:14.986 15:49:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:14.986 15:49:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:14.986 15:49:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.986 15:49:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.986 15:49:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.986 15:49:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.986 15:49:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.986 15:49:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.986 15:49:54 -- accel/accel.sh@40 -- # local IFS=, 00:07:14.986 15:49:54 -- accel/accel.sh@41 -- # jq -r . 00:07:14.986 [2024-04-26 15:49:54.614463] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:14.986 [2024-04-26 15:49:54.614550] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2289430 ] 00:07:14.986 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.245 [2024-04-26 15:49:54.714184] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.512 [2024-04-26 15:49:54.931798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.512 15:49:55 -- accel/accel.sh@20 -- # val= 00:07:15.512 15:49:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # IFS=: 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 15:49:55 -- accel/accel.sh@20 -- # val= 00:07:15.512 15:49:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # IFS=: 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 15:49:55 -- accel/accel.sh@20 -- # val=0x1 00:07:15.512 15:49:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # IFS=: 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 15:49:55 -- accel/accel.sh@20 -- # val= 00:07:15.512 15:49:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # IFS=: 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 15:49:55 -- accel/accel.sh@20 -- # val= 00:07:15.512 15:49:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # IFS=: 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 15:49:55 -- accel/accel.sh@20 -- # val=xor 00:07:15.512 15:49:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 15:49:55 -- accel/accel.sh@23 -- # accel_opc=xor 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # IFS=: 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 15:49:55 -- accel/accel.sh@20 -- # val=3 00:07:15.512 15:49:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # IFS=: 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 15:49:55 -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:07:15.512 15:49:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # IFS=: 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 15:49:55 -- accel/accel.sh@20 -- # val= 00:07:15.512 15:49:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # IFS=: 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 15:49:55 -- accel/accel.sh@20 -- # val=software 00:07:15.512 15:49:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 15:49:55 -- accel/accel.sh@22 -- # accel_module=software 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # IFS=: 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 15:49:55 -- accel/accel.sh@20 -- # val=32 00:07:15.512 15:49:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # IFS=: 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 15:49:55 -- accel/accel.sh@20 -- # val=32 00:07:15.512 15:49:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # IFS=: 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 15:49:55 -- accel/accel.sh@20 -- # val=1 00:07:15.512 15:49:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # IFS=: 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 15:49:55 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.512 15:49:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # IFS=: 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 15:49:55 -- accel/accel.sh@20 -- # val=Yes 00:07:15.512 15:49:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # IFS=: 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 15:49:55 -- accel/accel.sh@20 -- # val= 00:07:15.512 15:49:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # IFS=: 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 15:49:55 -- accel/accel.sh@20 -- # val= 00:07:15.512 15:49:55 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # IFS=: 00:07:15.512 15:49:55 -- accel/accel.sh@19 -- # read -r var val 00:07:17.461 15:49:57 -- accel/accel.sh@20 -- # val= 00:07:17.461 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.461 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:17.461 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:17.461 15:49:57 -- accel/accel.sh@20 -- # val= 00:07:17.461 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.461 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:17.461 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:17.461 15:49:57 -- accel/accel.sh@20 -- # val= 00:07:17.461 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.461 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:17.461 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:17.461 15:49:57 -- accel/accel.sh@20 -- # val= 00:07:17.461 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.461 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:17.461 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:17.461 15:49:57 -- accel/accel.sh@20 -- # val= 00:07:17.461 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.461 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:17.461 15:49:57 -- accel/accel.sh@19 -- # 
read -r var val 00:07:17.461 15:49:57 -- accel/accel.sh@20 -- # val= 00:07:17.461 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.461 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:17.461 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:17.461 15:49:57 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.461 15:49:57 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:17.461 15:49:57 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.461 00:07:17.461 real 0m2.566s 00:07:17.461 user 0m2.389s 00:07:17.461 sys 0m0.178s 00:07:17.461 15:49:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:17.461 15:49:57 -- common/autotest_common.sh@10 -- # set +x 00:07:17.461 ************************************ 00:07:17.461 END TEST accel_xor 00:07:17.461 ************************************ 00:07:17.721 15:49:57 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:17.722 15:49:57 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:17.722 15:49:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.722 15:49:57 -- common/autotest_common.sh@10 -- # set +x 00:07:17.722 ************************************ 00:07:17.722 START TEST accel_dif_verify 00:07:17.722 ************************************ 00:07:17.722 15:49:57 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:07:17.722 15:49:57 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.722 15:49:57 -- accel/accel.sh@17 -- # local accel_module 00:07:17.722 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:17.722 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:17.722 15:49:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:17.722 15:49:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:17.722 15:49:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.722 15:49:57 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.722 15:49:57 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.722 15:49:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.722 15:49:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.722 15:49:57 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.722 15:49:57 -- accel/accel.sh@40 -- # local IFS=, 00:07:17.722 15:49:57 -- accel/accel.sh@41 -- # jq -r . 00:07:17.722 [2024-04-26 15:49:57.336893] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
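The second accel_xor case adds -x 3, which the trace (val=3 read right after the xor opcode) suggests is the number of xor source buffers, and dif_verify swaps the workload again. Standalone sketches under the same path assumption as above:

  # xor with three source buffers, then DIF verification, 1 second each
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify
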
00:07:17.722 [2024-04-26 15:49:57.336968] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2289916 ] 00:07:17.722 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.982 [2024-04-26 15:49:57.440431] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.982 [2024-04-26 15:49:57.652395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.243 15:49:57 -- accel/accel.sh@20 -- # val= 00:07:18.243 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.243 15:49:57 -- accel/accel.sh@20 -- # val= 00:07:18.243 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.243 15:49:57 -- accel/accel.sh@20 -- # val=0x1 00:07:18.243 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.243 15:49:57 -- accel/accel.sh@20 -- # val= 00:07:18.243 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.243 15:49:57 -- accel/accel.sh@20 -- # val= 00:07:18.243 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.243 15:49:57 -- accel/accel.sh@20 -- # val=dif_verify 00:07:18.243 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.243 15:49:57 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.243 15:49:57 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.243 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.243 15:49:57 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.243 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.243 15:49:57 -- accel/accel.sh@20 -- # val='512 bytes' 00:07:18.243 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.243 15:49:57 -- accel/accel.sh@20 -- # val='8 bytes' 00:07:18.243 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.243 15:49:57 -- accel/accel.sh@20 -- # val= 00:07:18.243 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.243 15:49:57 -- accel/accel.sh@20 -- # val=software 00:07:18.243 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.243 15:49:57 -- accel/accel.sh@22 -- # accel_module=software 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # read -r 
var val 00:07:18.243 15:49:57 -- accel/accel.sh@20 -- # val=32 00:07:18.243 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.243 15:49:57 -- accel/accel.sh@20 -- # val=32 00:07:18.243 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.243 15:49:57 -- accel/accel.sh@20 -- # val=1 00:07:18.243 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.243 15:49:57 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.243 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.243 15:49:57 -- accel/accel.sh@20 -- # val=No 00:07:18.243 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.243 15:49:57 -- accel/accel.sh@20 -- # val= 00:07:18.243 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.243 15:49:57 -- accel/accel.sh@20 -- # val= 00:07:18.243 15:49:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.243 15:49:57 -- accel/accel.sh@19 -- # read -r var val 00:07:20.149 15:49:59 -- accel/accel.sh@20 -- # val= 00:07:20.149 15:49:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.149 15:49:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.149 15:49:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.149 15:49:59 -- accel/accel.sh@20 -- # val= 00:07:20.149 15:49:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.149 15:49:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.149 15:49:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.149 15:49:59 -- accel/accel.sh@20 -- # val= 00:07:20.149 15:49:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.149 15:49:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.149 15:49:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.149 15:49:59 -- accel/accel.sh@20 -- # val= 00:07:20.149 15:49:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.149 15:49:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.149 15:49:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.149 15:49:59 -- accel/accel.sh@20 -- # val= 00:07:20.149 15:49:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.149 15:49:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.149 15:49:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.149 15:49:59 -- accel/accel.sh@20 -- # val= 00:07:20.149 15:49:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.149 15:49:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.149 15:49:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.149 15:49:59 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.149 15:49:59 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:20.149 15:49:59 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.149 00:07:20.149 real 0m2.530s 00:07:20.149 user 0m2.344s 00:07:20.149 sys 0m0.188s 00:07:20.149 15:49:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:20.149 15:49:59 -- common/autotest_common.sh@10 -- # set +x 00:07:20.149 
************************************ 00:07:20.149 END TEST accel_dif_verify 00:07:20.149 ************************************ 00:07:20.408 15:49:59 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:20.408 15:49:59 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:20.408 15:49:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.408 15:49:59 -- common/autotest_common.sh@10 -- # set +x 00:07:20.408 ************************************ 00:07:20.408 START TEST accel_dif_generate 00:07:20.408 ************************************ 00:07:20.408 15:49:59 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:07:20.408 15:49:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.408 15:49:59 -- accel/accel.sh@17 -- # local accel_module 00:07:20.408 15:49:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.408 15:49:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.408 15:49:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:20.408 15:49:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:20.408 15:49:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.408 15:49:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.408 15:49:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.408 15:49:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.408 15:49:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.408 15:49:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.408 15:49:59 -- accel/accel.sh@40 -- # local IFS=, 00:07:20.408 15:49:59 -- accel/accel.sh@41 -- # jq -r . 00:07:20.408 [2024-04-26 15:50:00.030950] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
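Unlike the copy-style cases above, the dif_* invocations in this trace omit -y. A standalone sketch for the dif_generate case, under the same path assumption:

  # generate DIF metadata for 1 second on the software module
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate
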
00:07:20.408 [2024-04-26 15:50:00.031032] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2290399 ] 00:07:20.408 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.667 [2024-04-26 15:50:00.135218] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.927 [2024-04-26 15:50:00.359737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.186 15:50:00 -- accel/accel.sh@20 -- # val= 00:07:21.186 15:50:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.186 15:50:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.186 15:50:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.186 15:50:00 -- accel/accel.sh@20 -- # val= 00:07:21.186 15:50:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.186 15:50:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.186 15:50:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.186 15:50:00 -- accel/accel.sh@20 -- # val=0x1 00:07:21.186 15:50:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 15:50:00 -- accel/accel.sh@20 -- # val= 00:07:21.187 15:50:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 15:50:00 -- accel/accel.sh@20 -- # val= 00:07:21.187 15:50:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 15:50:00 -- accel/accel.sh@20 -- # val=dif_generate 00:07:21.187 15:50:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 15:50:00 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 15:50:00 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.187 15:50:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 15:50:00 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.187 15:50:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 15:50:00 -- accel/accel.sh@20 -- # val='512 bytes' 00:07:21.187 15:50:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 15:50:00 -- accel/accel.sh@20 -- # val='8 bytes' 00:07:21.187 15:50:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 15:50:00 -- accel/accel.sh@20 -- # val= 00:07:21.187 15:50:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 15:50:00 -- accel/accel.sh@20 -- # val=software 00:07:21.187 15:50:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 15:50:00 -- accel/accel.sh@22 -- # accel_module=software 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # read 
-r var val 00:07:21.187 15:50:00 -- accel/accel.sh@20 -- # val=32 00:07:21.187 15:50:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 15:50:00 -- accel/accel.sh@20 -- # val=32 00:07:21.187 15:50:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 15:50:00 -- accel/accel.sh@20 -- # val=1 00:07:21.187 15:50:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 15:50:00 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.187 15:50:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 15:50:00 -- accel/accel.sh@20 -- # val=No 00:07:21.187 15:50:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 15:50:00 -- accel/accel.sh@20 -- # val= 00:07:21.187 15:50:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.187 15:50:00 -- accel/accel.sh@20 -- # val= 00:07:21.187 15:50:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.187 15:50:00 -- accel/accel.sh@19 -- # read -r var val 00:07:23.094 15:50:02 -- accel/accel.sh@20 -- # val= 00:07:23.094 15:50:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.094 15:50:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.094 15:50:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.094 15:50:02 -- accel/accel.sh@20 -- # val= 00:07:23.094 15:50:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.094 15:50:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.094 15:50:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.094 15:50:02 -- accel/accel.sh@20 -- # val= 00:07:23.094 15:50:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.094 15:50:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.094 15:50:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.094 15:50:02 -- accel/accel.sh@20 -- # val= 00:07:23.094 15:50:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.094 15:50:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.094 15:50:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.094 15:50:02 -- accel/accel.sh@20 -- # val= 00:07:23.094 15:50:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.094 15:50:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.094 15:50:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.094 15:50:02 -- accel/accel.sh@20 -- # val= 00:07:23.094 15:50:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.094 15:50:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.094 15:50:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.094 15:50:02 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.094 15:50:02 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:23.094 15:50:02 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.094 00:07:23.094 real 0m2.575s 00:07:23.094 user 0m2.388s 00:07:23.094 sys 0m0.188s 00:07:23.094 15:50:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:23.094 15:50:02 -- common/autotest_common.sh@10 -- # set +x 00:07:23.094 
************************************ 00:07:23.094 END TEST accel_dif_generate 00:07:23.094 ************************************ 00:07:23.094 15:50:02 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:23.094 15:50:02 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:23.094 15:50:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.094 15:50:02 -- common/autotest_common.sh@10 -- # set +x 00:07:23.094 ************************************ 00:07:23.094 START TEST accel_dif_generate_copy 00:07:23.094 ************************************ 00:07:23.094 15:50:02 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:07:23.094 15:50:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.094 15:50:02 -- accel/accel.sh@17 -- # local accel_module 00:07:23.094 15:50:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.094 15:50:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.094 15:50:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:23.095 15:50:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:23.095 15:50:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.095 15:50:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.095 15:50:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.095 15:50:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.095 15:50:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.095 15:50:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.095 15:50:02 -- accel/accel.sh@40 -- # local IFS=, 00:07:23.095 15:50:02 -- accel/accel.sh@41 -- # jq -r . 00:07:23.354 [2024-04-26 15:50:02.780824] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
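Every case closes with the same three checks ([[ -n software ]], the opcode name, and the software/software comparison) followed by an END TEST banner, so the banners are a convenient way to summarize progress when reading a log this size offline. A small sketch, assuming the log has been saved as build.log (a hypothetical filename):

  # count the accel cases that reached their END TEST banner
  grep -o 'END TEST accel_[A-Za-z0-9_]*' build.log | sort | uniq -c
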
00:07:23.354 [2024-04-26 15:50:02.780920] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2290887 ] 00:07:23.354 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.354 [2024-04-26 15:50:02.886258] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.613 [2024-04-26 15:50:03.097638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.872 15:50:03 -- accel/accel.sh@20 -- # val= 00:07:23.872 15:50:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.872 15:50:03 -- accel/accel.sh@19 -- # IFS=: 00:07:23.872 15:50:03 -- accel/accel.sh@19 -- # read -r var val 00:07:23.872 15:50:03 -- accel/accel.sh@20 -- # val= 00:07:23.872 15:50:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.872 15:50:03 -- accel/accel.sh@19 -- # IFS=: 00:07:23.872 15:50:03 -- accel/accel.sh@19 -- # read -r var val 00:07:23.872 15:50:03 -- accel/accel.sh@20 -- # val=0x1 00:07:23.872 15:50:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.872 15:50:03 -- accel/accel.sh@19 -- # IFS=: 00:07:23.872 15:50:03 -- accel/accel.sh@19 -- # read -r var val 00:07:23.872 15:50:03 -- accel/accel.sh@20 -- # val= 00:07:23.872 15:50:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.872 15:50:03 -- accel/accel.sh@19 -- # IFS=: 00:07:23.872 15:50:03 -- accel/accel.sh@19 -- # read -r var val 00:07:23.872 15:50:03 -- accel/accel.sh@20 -- # val= 00:07:23.872 15:50:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.872 15:50:03 -- accel/accel.sh@19 -- # IFS=: 00:07:23.872 15:50:03 -- accel/accel.sh@19 -- # read -r var val 00:07:23.872 15:50:03 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:23.872 15:50:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.872 15:50:03 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:23.872 15:50:03 -- accel/accel.sh@19 -- # IFS=: 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # read -r var val 00:07:23.873 15:50:03 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.873 15:50:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # IFS=: 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # read -r var val 00:07:23.873 15:50:03 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.873 15:50:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # IFS=: 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # read -r var val 00:07:23.873 15:50:03 -- accel/accel.sh@20 -- # val= 00:07:23.873 15:50:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # IFS=: 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # read -r var val 00:07:23.873 15:50:03 -- accel/accel.sh@20 -- # val=software 00:07:23.873 15:50:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.873 15:50:03 -- accel/accel.sh@22 -- # accel_module=software 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # IFS=: 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # read -r var val 00:07:23.873 15:50:03 -- accel/accel.sh@20 -- # val=32 00:07:23.873 15:50:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # IFS=: 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # read -r var val 00:07:23.873 15:50:03 -- accel/accel.sh@20 -- # val=32 00:07:23.873 15:50:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # IFS=: 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # read -r 
var val 00:07:23.873 15:50:03 -- accel/accel.sh@20 -- # val=1 00:07:23.873 15:50:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # IFS=: 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # read -r var val 00:07:23.873 15:50:03 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.873 15:50:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # IFS=: 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # read -r var val 00:07:23.873 15:50:03 -- accel/accel.sh@20 -- # val=No 00:07:23.873 15:50:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # IFS=: 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # read -r var val 00:07:23.873 15:50:03 -- accel/accel.sh@20 -- # val= 00:07:23.873 15:50:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # IFS=: 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # read -r var val 00:07:23.873 15:50:03 -- accel/accel.sh@20 -- # val= 00:07:23.873 15:50:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # IFS=: 00:07:23.873 15:50:03 -- accel/accel.sh@19 -- # read -r var val 00:07:25.779 15:50:05 -- accel/accel.sh@20 -- # val= 00:07:25.779 15:50:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.779 15:50:05 -- accel/accel.sh@19 -- # IFS=: 00:07:25.779 15:50:05 -- accel/accel.sh@19 -- # read -r var val 00:07:25.779 15:50:05 -- accel/accel.sh@20 -- # val= 00:07:25.779 15:50:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.779 15:50:05 -- accel/accel.sh@19 -- # IFS=: 00:07:25.779 15:50:05 -- accel/accel.sh@19 -- # read -r var val 00:07:25.779 15:50:05 -- accel/accel.sh@20 -- # val= 00:07:25.779 15:50:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.779 15:50:05 -- accel/accel.sh@19 -- # IFS=: 00:07:25.779 15:50:05 -- accel/accel.sh@19 -- # read -r var val 00:07:25.779 15:50:05 -- accel/accel.sh@20 -- # val= 00:07:25.779 15:50:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.779 15:50:05 -- accel/accel.sh@19 -- # IFS=: 00:07:25.779 15:50:05 -- accel/accel.sh@19 -- # read -r var val 00:07:25.779 15:50:05 -- accel/accel.sh@20 -- # val= 00:07:25.779 15:50:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.779 15:50:05 -- accel/accel.sh@19 -- # IFS=: 00:07:25.779 15:50:05 -- accel/accel.sh@19 -- # read -r var val 00:07:25.779 15:50:05 -- accel/accel.sh@20 -- # val= 00:07:25.779 15:50:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.779 15:50:05 -- accel/accel.sh@19 -- # IFS=: 00:07:25.779 15:50:05 -- accel/accel.sh@19 -- # read -r var val 00:07:25.779 15:50:05 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.779 15:50:05 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:25.779 15:50:05 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.779 00:07:25.779 real 0m2.533s 00:07:25.779 user 0m2.332s 00:07:25.779 sys 0m0.202s 00:07:25.779 15:50:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:25.779 15:50:05 -- common/autotest_common.sh@10 -- # set +x 00:07:25.779 ************************************ 00:07:25.779 END TEST accel_dif_generate_copy 00:07:25.779 ************************************ 00:07:25.779 15:50:05 -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:25.779 15:50:05 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:25.779 15:50:05 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:25.779 15:50:05 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.779 15:50:05 -- common/autotest_common.sh@10 -- # set +x 00:07:25.779 ************************************ 00:07:25.779 START TEST accel_comp 00:07:25.779 ************************************ 00:07:25.779 15:50:05 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:25.779 15:50:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.779 15:50:05 -- accel/accel.sh@17 -- # local accel_module 00:07:25.779 15:50:05 -- accel/accel.sh@19 -- # IFS=: 00:07:25.779 15:50:05 -- accel/accel.sh@19 -- # read -r var val 00:07:25.779 15:50:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:25.779 15:50:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:25.779 15:50:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.779 15:50:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.779 15:50:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.779 15:50:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.779 15:50:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.779 15:50:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.779 15:50:05 -- accel/accel.sh@40 -- # local IFS=, 00:07:25.779 15:50:05 -- accel/accel.sh@41 -- # jq -r . 00:07:26.037 [2024-04-26 15:50:05.469890] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:26.037 [2024-04-26 15:50:05.469971] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2291376 ] 00:07:26.037 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.037 [2024-04-26 15:50:05.575045] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.296 [2024-04-26 15:50:05.789411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.555 15:50:06 -- accel/accel.sh@20 -- # val= 00:07:26.555 15:50:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.555 15:50:06 -- accel/accel.sh@19 -- # IFS=: 00:07:26.555 15:50:06 -- accel/accel.sh@19 -- # read -r var val 00:07:26.555 15:50:06 -- accel/accel.sh@20 -- # val= 00:07:26.555 15:50:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.555 15:50:06 -- accel/accel.sh@19 -- # IFS=: 00:07:26.555 15:50:06 -- accel/accel.sh@19 -- # read -r var val 00:07:26.555 15:50:06 -- accel/accel.sh@20 -- # val= 00:07:26.556 15:50:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # IFS=: 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # read -r var val 00:07:26.556 15:50:06 -- accel/accel.sh@20 -- # val=0x1 00:07:26.556 15:50:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # IFS=: 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # read -r var val 00:07:26.556 15:50:06 -- accel/accel.sh@20 -- # val= 00:07:26.556 15:50:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # IFS=: 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # read -r var val 00:07:26.556 15:50:06 -- accel/accel.sh@20 -- # val= 00:07:26.556 15:50:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # IFS=: 00:07:26.556 15:50:06 
-- accel/accel.sh@19 -- # read -r var val 00:07:26.556 15:50:06 -- accel/accel.sh@20 -- # val=compress 00:07:26.556 15:50:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.556 15:50:06 -- accel/accel.sh@23 -- # accel_opc=compress 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # IFS=: 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # read -r var val 00:07:26.556 15:50:06 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.556 15:50:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # IFS=: 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # read -r var val 00:07:26.556 15:50:06 -- accel/accel.sh@20 -- # val= 00:07:26.556 15:50:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # IFS=: 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # read -r var val 00:07:26.556 15:50:06 -- accel/accel.sh@20 -- # val=software 00:07:26.556 15:50:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.556 15:50:06 -- accel/accel.sh@22 -- # accel_module=software 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # IFS=: 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # read -r var val 00:07:26.556 15:50:06 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:26.556 15:50:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # IFS=: 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # read -r var val 00:07:26.556 15:50:06 -- accel/accel.sh@20 -- # val=32 00:07:26.556 15:50:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # IFS=: 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # read -r var val 00:07:26.556 15:50:06 -- accel/accel.sh@20 -- # val=32 00:07:26.556 15:50:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # IFS=: 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # read -r var val 00:07:26.556 15:50:06 -- accel/accel.sh@20 -- # val=1 00:07:26.556 15:50:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # IFS=: 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # read -r var val 00:07:26.556 15:50:06 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.556 15:50:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # IFS=: 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # read -r var val 00:07:26.556 15:50:06 -- accel/accel.sh@20 -- # val=No 00:07:26.556 15:50:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # IFS=: 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # read -r var val 00:07:26.556 15:50:06 -- accel/accel.sh@20 -- # val= 00:07:26.556 15:50:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # IFS=: 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # read -r var val 00:07:26.556 15:50:06 -- accel/accel.sh@20 -- # val= 00:07:26.556 15:50:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # IFS=: 00:07:26.556 15:50:06 -- accel/accel.sh@19 -- # read -r var val 00:07:28.464 15:50:07 -- accel/accel.sh@20 -- # val= 00:07:28.464 15:50:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.464 15:50:07 -- accel/accel.sh@19 -- # IFS=: 00:07:28.464 15:50:07 -- accel/accel.sh@19 -- # read -r var val 00:07:28.464 15:50:07 -- accel/accel.sh@20 -- # val= 00:07:28.464 15:50:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.464 15:50:07 -- accel/accel.sh@19 -- # IFS=: 00:07:28.464 15:50:07 -- accel/accel.sh@19 -- # read 
-r var val 00:07:28.464 15:50:07 -- accel/accel.sh@20 -- # val= 00:07:28.464 15:50:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.464 15:50:07 -- accel/accel.sh@19 -- # IFS=: 00:07:28.464 15:50:07 -- accel/accel.sh@19 -- # read -r var val 00:07:28.464 15:50:07 -- accel/accel.sh@20 -- # val= 00:07:28.464 15:50:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.464 15:50:07 -- accel/accel.sh@19 -- # IFS=: 00:07:28.464 15:50:07 -- accel/accel.sh@19 -- # read -r var val 00:07:28.464 15:50:07 -- accel/accel.sh@20 -- # val= 00:07:28.464 15:50:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.464 15:50:07 -- accel/accel.sh@19 -- # IFS=: 00:07:28.464 15:50:07 -- accel/accel.sh@19 -- # read -r var val 00:07:28.464 15:50:07 -- accel/accel.sh@20 -- # val= 00:07:28.464 15:50:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.464 15:50:07 -- accel/accel.sh@19 -- # IFS=: 00:07:28.464 15:50:07 -- accel/accel.sh@19 -- # read -r var val 00:07:28.464 15:50:07 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.464 15:50:07 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:28.464 15:50:07 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.464 00:07:28.464 real 0m2.549s 00:07:28.464 user 0m2.358s 00:07:28.464 sys 0m0.192s 00:07:28.464 15:50:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:28.464 15:50:07 -- common/autotest_common.sh@10 -- # set +x 00:07:28.464 ************************************ 00:07:28.464 END TEST accel_comp 00:07:28.464 ************************************ 00:07:28.464 15:50:08 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:28.464 15:50:08 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:28.464 15:50:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.464 15:50:08 -- common/autotest_common.sh@10 -- # set +x 00:07:28.464 ************************************ 00:07:28.464 START TEST accel_decomp 00:07:28.464 ************************************ 00:07:28.464 15:50:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:28.464 15:50:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.464 15:50:08 -- accel/accel.sh@17 -- # local accel_module 00:07:28.464 15:50:08 -- accel/accel.sh@19 -- # IFS=: 00:07:28.464 15:50:08 -- accel/accel.sh@19 -- # read -r var val 00:07:28.464 15:50:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:28.464 15:50:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:28.464 15:50:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.464 15:50:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.464 15:50:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.464 15:50:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.464 15:50:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.464 15:50:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.464 15:50:08 -- accel/accel.sh@40 -- # local IFS=, 00:07:28.464 15:50:08 -- accel/accel.sh@41 -- # jq -r . 00:07:28.724 [2024-04-26 15:50:08.179764] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
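The long runs of '-- # IFS=:' / '-- # read -r var val' above (and throughout these tests) appear to be the harness splitting accel_perf's "key: value" output one field at a time (module, opcode, transfer size, run time, and so on). A minimal sketch of that bash idiom, using illustrative field names and sample data rather than the exact accel_perf labels:

    # Split "Key: value" lines on ':' and keep selected fields;
    # the printf input below is illustrative sample data only.
    while IFS=: read -r var val; do
        case "$var" in
            "Module")        module=${val# } ;;
            "Workload Type") opc=${val# }    ;;
        esac
    done < <(printf 'Module: software\nWorkload Type: decompress\n')
    echo "module=$module opc=$opc"        # -> module=software opc=decompress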
00:07:28.724 [2024-04-26 15:50:08.179855] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2291855 ] 00:07:28.724 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.724 [2024-04-26 15:50:08.285190] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.983 [2024-04-26 15:50:08.503087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.243 15:50:08 -- accel/accel.sh@20 -- # val= 00:07:29.243 15:50:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # IFS=: 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # read -r var val 00:07:29.243 15:50:08 -- accel/accel.sh@20 -- # val= 00:07:29.243 15:50:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # IFS=: 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # read -r var val 00:07:29.243 15:50:08 -- accel/accel.sh@20 -- # val= 00:07:29.243 15:50:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # IFS=: 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # read -r var val 00:07:29.243 15:50:08 -- accel/accel.sh@20 -- # val=0x1 00:07:29.243 15:50:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # IFS=: 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # read -r var val 00:07:29.243 15:50:08 -- accel/accel.sh@20 -- # val= 00:07:29.243 15:50:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # IFS=: 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # read -r var val 00:07:29.243 15:50:08 -- accel/accel.sh@20 -- # val= 00:07:29.243 15:50:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # IFS=: 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # read -r var val 00:07:29.243 15:50:08 -- accel/accel.sh@20 -- # val=decompress 00:07:29.243 15:50:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.243 15:50:08 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # IFS=: 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # read -r var val 00:07:29.243 15:50:08 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.243 15:50:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # IFS=: 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # read -r var val 00:07:29.243 15:50:08 -- accel/accel.sh@20 -- # val= 00:07:29.243 15:50:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # IFS=: 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # read -r var val 00:07:29.243 15:50:08 -- accel/accel.sh@20 -- # val=software 00:07:29.243 15:50:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.243 15:50:08 -- accel/accel.sh@22 -- # accel_module=software 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # IFS=: 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # read -r var val 00:07:29.243 15:50:08 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:29.243 15:50:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # IFS=: 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # read -r var val 00:07:29.243 15:50:08 -- accel/accel.sh@20 -- # val=32 00:07:29.243 15:50:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # IFS=: 00:07:29.243 15:50:08 
-- accel/accel.sh@19 -- # read -r var val 00:07:29.243 15:50:08 -- accel/accel.sh@20 -- # val=32 00:07:29.243 15:50:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # IFS=: 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # read -r var val 00:07:29.243 15:50:08 -- accel/accel.sh@20 -- # val=1 00:07:29.243 15:50:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # IFS=: 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # read -r var val 00:07:29.243 15:50:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.243 15:50:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # IFS=: 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # read -r var val 00:07:29.243 15:50:08 -- accel/accel.sh@20 -- # val=Yes 00:07:29.243 15:50:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # IFS=: 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # read -r var val 00:07:29.243 15:50:08 -- accel/accel.sh@20 -- # val= 00:07:29.243 15:50:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # IFS=: 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # read -r var val 00:07:29.243 15:50:08 -- accel/accel.sh@20 -- # val= 00:07:29.243 15:50:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # IFS=: 00:07:29.243 15:50:08 -- accel/accel.sh@19 -- # read -r var val 00:07:31.150 15:50:10 -- accel/accel.sh@20 -- # val= 00:07:31.150 15:50:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.150 15:50:10 -- accel/accel.sh@19 -- # IFS=: 00:07:31.150 15:50:10 -- accel/accel.sh@19 -- # read -r var val 00:07:31.150 15:50:10 -- accel/accel.sh@20 -- # val= 00:07:31.150 15:50:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.150 15:50:10 -- accel/accel.sh@19 -- # IFS=: 00:07:31.150 15:50:10 -- accel/accel.sh@19 -- # read -r var val 00:07:31.150 15:50:10 -- accel/accel.sh@20 -- # val= 00:07:31.150 15:50:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.150 15:50:10 -- accel/accel.sh@19 -- # IFS=: 00:07:31.150 15:50:10 -- accel/accel.sh@19 -- # read -r var val 00:07:31.150 15:50:10 -- accel/accel.sh@20 -- # val= 00:07:31.150 15:50:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.150 15:50:10 -- accel/accel.sh@19 -- # IFS=: 00:07:31.150 15:50:10 -- accel/accel.sh@19 -- # read -r var val 00:07:31.150 15:50:10 -- accel/accel.sh@20 -- # val= 00:07:31.150 15:50:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.150 15:50:10 -- accel/accel.sh@19 -- # IFS=: 00:07:31.150 15:50:10 -- accel/accel.sh@19 -- # read -r var val 00:07:31.150 15:50:10 -- accel/accel.sh@20 -- # val= 00:07:31.150 15:50:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.150 15:50:10 -- accel/accel.sh@19 -- # IFS=: 00:07:31.150 15:50:10 -- accel/accel.sh@19 -- # read -r var val 00:07:31.150 15:50:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.150 15:50:10 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:31.150 15:50:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.150 00:07:31.150 real 0m2.595s 00:07:31.150 user 0m2.417s 00:07:31.150 sys 0m0.179s 00:07:31.150 15:50:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:31.150 15:50:10 -- common/autotest_common.sh@10 -- # set +x 00:07:31.150 ************************************ 00:07:31.150 END TEST accel_decomp 00:07:31.150 ************************************ 00:07:31.150 15:50:10 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:31.150 15:50:10 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:31.150 15:50:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.150 15:50:10 -- common/autotest_common.sh@10 -- # set +x 00:07:31.410 ************************************ 00:07:31.410 START TEST accel_decmop_full 00:07:31.410 ************************************ 00:07:31.410 15:50:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:31.410 15:50:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.410 15:50:10 -- accel/accel.sh@17 -- # local accel_module 00:07:31.410 15:50:10 -- accel/accel.sh@19 -- # IFS=: 00:07:31.410 15:50:10 -- accel/accel.sh@19 -- # read -r var val 00:07:31.410 15:50:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:31.410 15:50:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:31.410 15:50:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.410 15:50:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.410 15:50:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.410 15:50:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.410 15:50:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.410 15:50:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.410 15:50:10 -- accel/accel.sh@40 -- # local IFS=, 00:07:31.410 15:50:10 -- accel/accel.sh@41 -- # jq -r . 00:07:31.410 [2024-04-26 15:50:10.926088] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
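accel_decmop_full repeats the decompress workload with '-o 0' added. Judging from the traced buffer size changing from '4096 bytes' in the previous test to '111250 bytes' here (presumably the size of test/accel/bib), the flag appears to switch accel_perf from 4 KiB chunks to whole-file operations. A side-by-side sketch of the two invocations, run from the root of an SPDK checkout:

    # chunked: 4096-byte operations (as in accel_decomp above)
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y
    # full:    111250-byte operations (as traced in this test)
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0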
00:07:31.410 [2024-04-26 15:50:10.926166] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2292344 ] 00:07:31.410 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.410 [2024-04-26 15:50:11.027181] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.670 [2024-04-26 15:50:11.243303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.929 15:50:11 -- accel/accel.sh@20 -- # val= 00:07:31.929 15:50:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # IFS=: 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # read -r var val 00:07:31.929 15:50:11 -- accel/accel.sh@20 -- # val= 00:07:31.929 15:50:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # IFS=: 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # read -r var val 00:07:31.929 15:50:11 -- accel/accel.sh@20 -- # val= 00:07:31.929 15:50:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # IFS=: 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # read -r var val 00:07:31.929 15:50:11 -- accel/accel.sh@20 -- # val=0x1 00:07:31.929 15:50:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # IFS=: 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # read -r var val 00:07:31.929 15:50:11 -- accel/accel.sh@20 -- # val= 00:07:31.929 15:50:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # IFS=: 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # read -r var val 00:07:31.929 15:50:11 -- accel/accel.sh@20 -- # val= 00:07:31.929 15:50:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # IFS=: 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # read -r var val 00:07:31.929 15:50:11 -- accel/accel.sh@20 -- # val=decompress 00:07:31.929 15:50:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.929 15:50:11 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # IFS=: 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # read -r var val 00:07:31.929 15:50:11 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:31.929 15:50:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # IFS=: 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # read -r var val 00:07:31.929 15:50:11 -- accel/accel.sh@20 -- # val= 00:07:31.929 15:50:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # IFS=: 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # read -r var val 00:07:31.929 15:50:11 -- accel/accel.sh@20 -- # val=software 00:07:31.929 15:50:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.929 15:50:11 -- accel/accel.sh@22 -- # accel_module=software 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # IFS=: 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # read -r var val 00:07:31.929 15:50:11 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:31.929 15:50:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # IFS=: 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # read -r var val 00:07:31.929 15:50:11 -- accel/accel.sh@20 -- # val=32 00:07:31.929 15:50:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # IFS=: 00:07:31.929 
15:50:11 -- accel/accel.sh@19 -- # read -r var val 00:07:31.929 15:50:11 -- accel/accel.sh@20 -- # val=32 00:07:31.929 15:50:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # IFS=: 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # read -r var val 00:07:31.929 15:50:11 -- accel/accel.sh@20 -- # val=1 00:07:31.929 15:50:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # IFS=: 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # read -r var val 00:07:31.929 15:50:11 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.929 15:50:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # IFS=: 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # read -r var val 00:07:31.929 15:50:11 -- accel/accel.sh@20 -- # val=Yes 00:07:31.929 15:50:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # IFS=: 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # read -r var val 00:07:31.929 15:50:11 -- accel/accel.sh@20 -- # val= 00:07:31.929 15:50:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # IFS=: 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # read -r var val 00:07:31.929 15:50:11 -- accel/accel.sh@20 -- # val= 00:07:31.929 15:50:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # IFS=: 00:07:31.929 15:50:11 -- accel/accel.sh@19 -- # read -r var val 00:07:33.836 15:50:13 -- accel/accel.sh@20 -- # val= 00:07:33.836 15:50:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.836 15:50:13 -- accel/accel.sh@19 -- # IFS=: 00:07:33.836 15:50:13 -- accel/accel.sh@19 -- # read -r var val 00:07:33.836 15:50:13 -- accel/accel.sh@20 -- # val= 00:07:33.836 15:50:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.836 15:50:13 -- accel/accel.sh@19 -- # IFS=: 00:07:33.836 15:50:13 -- accel/accel.sh@19 -- # read -r var val 00:07:33.836 15:50:13 -- accel/accel.sh@20 -- # val= 00:07:33.836 15:50:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.836 15:50:13 -- accel/accel.sh@19 -- # IFS=: 00:07:33.836 15:50:13 -- accel/accel.sh@19 -- # read -r var val 00:07:33.836 15:50:13 -- accel/accel.sh@20 -- # val= 00:07:33.836 15:50:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.836 15:50:13 -- accel/accel.sh@19 -- # IFS=: 00:07:33.836 15:50:13 -- accel/accel.sh@19 -- # read -r var val 00:07:33.836 15:50:13 -- accel/accel.sh@20 -- # val= 00:07:33.836 15:50:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.836 15:50:13 -- accel/accel.sh@19 -- # IFS=: 00:07:33.836 15:50:13 -- accel/accel.sh@19 -- # read -r var val 00:07:33.836 15:50:13 -- accel/accel.sh@20 -- # val= 00:07:33.836 15:50:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.836 15:50:13 -- accel/accel.sh@19 -- # IFS=: 00:07:33.836 15:50:13 -- accel/accel.sh@19 -- # read -r var val 00:07:33.836 15:50:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.836 15:50:13 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:33.836 15:50:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.836 00:07:33.836 real 0m2.551s 00:07:33.836 user 0m2.370s 00:07:33.836 sys 0m0.181s 00:07:33.836 15:50:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:33.836 15:50:13 -- common/autotest_common.sh@10 -- # set +x 00:07:33.836 ************************************ 00:07:33.836 END TEST accel_decmop_full 00:07:33.836 ************************************ 00:07:33.836 15:50:13 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore 
accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:33.836 15:50:13 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:33.836 15:50:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.836 15:50:13 -- common/autotest_common.sh@10 -- # set +x 00:07:34.095 ************************************ 00:07:34.095 START TEST accel_decomp_mcore 00:07:34.095 ************************************ 00:07:34.095 15:50:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:34.095 15:50:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.095 15:50:13 -- accel/accel.sh@17 -- # local accel_module 00:07:34.095 15:50:13 -- accel/accel.sh@19 -- # IFS=: 00:07:34.095 15:50:13 -- accel/accel.sh@19 -- # read -r var val 00:07:34.095 15:50:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:34.095 15:50:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:34.095 15:50:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.095 15:50:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.095 15:50:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.095 15:50:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.095 15:50:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.095 15:50:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.095 15:50:13 -- accel/accel.sh@40 -- # local IFS=, 00:07:34.095 15:50:13 -- accel/accel.sh@41 -- # jq -r . 00:07:34.095 [2024-04-26 15:50:13.633908] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
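accel_decomp_mcore adds '-m 0xf', a hexadecimal core mask: bits 0-3 set selects four cores, which matches the "Total cores available: 4" notice and the four reactor start-up lines below. A sketch of the multi-core invocation, again assuming a default SPDK build tree and running from the repository root:

    # 0xf == 0b1111 -> reactors on cores 0,1,2,3 (four reactors in the log below)
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf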
00:07:34.095 [2024-04-26 15:50:13.633985] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2292830 ] 00:07:34.095 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.095 [2024-04-26 15:50:13.736570] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.354 [2024-04-26 15:50:13.954913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.355 [2024-04-26 15:50:13.954983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.355 [2024-04-26 15:50:13.955047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.355 [2024-04-26 15:50:13.955052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.614 15:50:14 -- accel/accel.sh@20 -- # val= 00:07:34.614 15:50:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # IFS=: 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # read -r var val 00:07:34.614 15:50:14 -- accel/accel.sh@20 -- # val= 00:07:34.614 15:50:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # IFS=: 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # read -r var val 00:07:34.614 15:50:14 -- accel/accel.sh@20 -- # val= 00:07:34.614 15:50:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # IFS=: 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # read -r var val 00:07:34.614 15:50:14 -- accel/accel.sh@20 -- # val=0xf 00:07:34.614 15:50:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # IFS=: 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # read -r var val 00:07:34.614 15:50:14 -- accel/accel.sh@20 -- # val= 00:07:34.614 15:50:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # IFS=: 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # read -r var val 00:07:34.614 15:50:14 -- accel/accel.sh@20 -- # val= 00:07:34.614 15:50:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # IFS=: 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # read -r var val 00:07:34.614 15:50:14 -- accel/accel.sh@20 -- # val=decompress 00:07:34.614 15:50:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.614 15:50:14 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # IFS=: 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # read -r var val 00:07:34.614 15:50:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.614 15:50:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # IFS=: 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # read -r var val 00:07:34.614 15:50:14 -- accel/accel.sh@20 -- # val= 00:07:34.614 15:50:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # IFS=: 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # read -r var val 00:07:34.614 15:50:14 -- accel/accel.sh@20 -- # val=software 00:07:34.614 15:50:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.614 15:50:14 -- accel/accel.sh@22 -- # accel_module=software 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # IFS=: 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # read -r var val 00:07:34.614 15:50:14 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:34.614 15:50:14 -- accel/accel.sh@21 -- # case 
"$var" in 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # IFS=: 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # read -r var val 00:07:34.614 15:50:14 -- accel/accel.sh@20 -- # val=32 00:07:34.614 15:50:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # IFS=: 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # read -r var val 00:07:34.614 15:50:14 -- accel/accel.sh@20 -- # val=32 00:07:34.614 15:50:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # IFS=: 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # read -r var val 00:07:34.614 15:50:14 -- accel/accel.sh@20 -- # val=1 00:07:34.614 15:50:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # IFS=: 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # read -r var val 00:07:34.614 15:50:14 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.614 15:50:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # IFS=: 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # read -r var val 00:07:34.614 15:50:14 -- accel/accel.sh@20 -- # val=Yes 00:07:34.614 15:50:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # IFS=: 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # read -r var val 00:07:34.614 15:50:14 -- accel/accel.sh@20 -- # val= 00:07:34.614 15:50:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # IFS=: 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # read -r var val 00:07:34.614 15:50:14 -- accel/accel.sh@20 -- # val= 00:07:34.614 15:50:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # IFS=: 00:07:34.614 15:50:14 -- accel/accel.sh@19 -- # read -r var val 00:07:36.520 15:50:16 -- accel/accel.sh@20 -- # val= 00:07:36.520 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.521 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:36.521 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:36.521 15:50:16 -- accel/accel.sh@20 -- # val= 00:07:36.521 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.521 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:36.521 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:36.521 15:50:16 -- accel/accel.sh@20 -- # val= 00:07:36.521 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.521 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:36.521 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:36.521 15:50:16 -- accel/accel.sh@20 -- # val= 00:07:36.521 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.521 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:36.521 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:36.521 15:50:16 -- accel/accel.sh@20 -- # val= 00:07:36.521 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.521 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:36.521 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:36.521 15:50:16 -- accel/accel.sh@20 -- # val= 00:07:36.521 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.521 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:36.521 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:36.521 15:50:16 -- accel/accel.sh@20 -- # val= 00:07:36.521 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.521 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:36.521 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:36.521 15:50:16 -- accel/accel.sh@20 -- # val= 00:07:36.521 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.521 
15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:36.521 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:36.521 15:50:16 -- accel/accel.sh@20 -- # val= 00:07:36.521 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.521 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:36.521 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:36.521 15:50:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.521 15:50:16 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:36.521 15:50:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.521 00:07:36.521 real 0m2.609s 00:07:36.521 user 0m7.897s 00:07:36.521 sys 0m0.198s 00:07:36.521 15:50:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:36.521 15:50:16 -- common/autotest_common.sh@10 -- # set +x 00:07:36.521 ************************************ 00:07:36.521 END TEST accel_decomp_mcore 00:07:36.521 ************************************ 00:07:36.781 15:50:16 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:36.781 15:50:16 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:36.781 15:50:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.781 15:50:16 -- common/autotest_common.sh@10 -- # set +x 00:07:36.781 ************************************ 00:07:36.781 START TEST accel_decomp_full_mcore 00:07:36.781 ************************************ 00:07:36.781 15:50:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:36.781 15:50:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.781 15:50:16 -- accel/accel.sh@17 -- # local accel_module 00:07:36.781 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:36.781 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:36.781 15:50:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:36.781 15:50:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:36.781 15:50:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.781 15:50:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.781 15:50:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.781 15:50:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.781 15:50:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.781 15:50:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.781 15:50:16 -- accel/accel.sh@40 -- # local IFS=, 00:07:36.781 15:50:16 -- accel/accel.sh@41 -- # jq -r . 00:07:36.781 [2024-04-26 15:50:16.392935] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:07:36.781 [2024-04-26 15:50:16.393005] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2293315 ] 00:07:36.781 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.040 [2024-04-26 15:50:16.494495] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.041 [2024-04-26 15:50:16.713912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.041 [2024-04-26 15:50:16.713986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.041 [2024-04-26 15:50:16.714052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.041 [2024-04-26 15:50:16.714056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.300 15:50:16 -- accel/accel.sh@20 -- # val= 00:07:37.300 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.300 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.300 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.300 15:50:16 -- accel/accel.sh@20 -- # val= 00:07:37.300 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.300 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.300 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.300 15:50:16 -- accel/accel.sh@20 -- # val= 00:07:37.300 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.300 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.560 15:50:16 -- accel/accel.sh@20 -- # val=0xf 00:07:37.560 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.560 15:50:16 -- accel/accel.sh@20 -- # val= 00:07:37.560 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.560 15:50:16 -- accel/accel.sh@20 -- # val= 00:07:37.560 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.560 15:50:16 -- accel/accel.sh@20 -- # val=decompress 00:07:37.560 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.560 15:50:16 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.560 15:50:16 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:37.560 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.560 15:50:16 -- accel/accel.sh@20 -- # val= 00:07:37.560 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.560 15:50:16 -- accel/accel.sh@20 -- # val=software 00:07:37.560 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.560 15:50:16 -- accel/accel.sh@22 -- # accel_module=software 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.560 15:50:16 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:37.560 15:50:16 -- accel/accel.sh@21 -- # case 
"$var" in 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.560 15:50:16 -- accel/accel.sh@20 -- # val=32 00:07:37.560 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.560 15:50:16 -- accel/accel.sh@20 -- # val=32 00:07:37.560 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.560 15:50:16 -- accel/accel.sh@20 -- # val=1 00:07:37.560 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.560 15:50:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.560 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.560 15:50:16 -- accel/accel.sh@20 -- # val=Yes 00:07:37.560 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.560 15:50:16 -- accel/accel.sh@20 -- # val= 00:07:37.560 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.560 15:50:16 -- accel/accel.sh@20 -- # val= 00:07:37.560 15:50:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.560 15:50:16 -- accel/accel.sh@19 -- # read -r var val 00:07:39.476 15:50:19 -- accel/accel.sh@20 -- # val= 00:07:39.477 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.477 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:39.477 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:39.477 15:50:19 -- accel/accel.sh@20 -- # val= 00:07:39.477 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.477 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:39.477 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:39.477 15:50:19 -- accel/accel.sh@20 -- # val= 00:07:39.477 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.477 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:39.477 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:39.477 15:50:19 -- accel/accel.sh@20 -- # val= 00:07:39.477 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.477 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:39.477 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:39.477 15:50:19 -- accel/accel.sh@20 -- # val= 00:07:39.477 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.477 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:39.477 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:39.477 15:50:19 -- accel/accel.sh@20 -- # val= 00:07:39.477 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.477 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:39.477 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:39.477 15:50:19 -- accel/accel.sh@20 -- # val= 00:07:39.477 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.477 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:39.477 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:39.477 15:50:19 -- accel/accel.sh@20 -- # val= 00:07:39.477 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.477 
15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:39.477 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:39.477 15:50:19 -- accel/accel.sh@20 -- # val= 00:07:39.477 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.477 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:39.477 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:39.477 15:50:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.477 15:50:19 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:39.477 15:50:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.477 00:07:39.477 real 0m2.674s 00:07:39.477 user 0m8.143s 00:07:39.477 sys 0m0.201s 00:07:39.477 15:50:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:39.477 15:50:19 -- common/autotest_common.sh@10 -- # set +x 00:07:39.477 ************************************ 00:07:39.477 END TEST accel_decomp_full_mcore 00:07:39.477 ************************************ 00:07:39.477 15:50:19 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:39.477 15:50:19 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:39.477 15:50:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.477 15:50:19 -- common/autotest_common.sh@10 -- # set +x 00:07:39.736 ************************************ 00:07:39.736 START TEST accel_decomp_mthread 00:07:39.736 ************************************ 00:07:39.736 15:50:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:39.736 15:50:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:39.736 15:50:19 -- accel/accel.sh@17 -- # local accel_module 00:07:39.736 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:39.736 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:39.736 15:50:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:39.736 15:50:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:39.736 15:50:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.736 15:50:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.736 15:50:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.736 15:50:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.736 15:50:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.736 15:50:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.736 15:50:19 -- accel/accel.sh@40 -- # local IFS=, 00:07:39.736 15:50:19 -- accel/accel.sh@41 -- # jq -r . 00:07:39.736 [2024-04-26 15:50:19.226091] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
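accel_decomp_mthread keeps the single-core 0x1 mask but adds '-T 2'; the value 2 threaded through the trace below suggests two accel_perf worker threads sharing that core. A sketch of the invocation under the same assumptions as the earlier examples:

    # one core (0x1 mask, as traced), two worker threads per the -T 2 seen above
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2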
00:07:39.736 [2024-04-26 15:50:19.226162] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2293806 ] 00:07:39.736 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.736 [2024-04-26 15:50:19.327867] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.996 [2024-04-26 15:50:19.542795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.264 15:50:19 -- accel/accel.sh@20 -- # val= 00:07:40.264 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.264 15:50:19 -- accel/accel.sh@20 -- # val= 00:07:40.264 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.264 15:50:19 -- accel/accel.sh@20 -- # val= 00:07:40.264 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.264 15:50:19 -- accel/accel.sh@20 -- # val=0x1 00:07:40.264 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.264 15:50:19 -- accel/accel.sh@20 -- # val= 00:07:40.264 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.264 15:50:19 -- accel/accel.sh@20 -- # val= 00:07:40.264 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.264 15:50:19 -- accel/accel.sh@20 -- # val=decompress 00:07:40.264 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.264 15:50:19 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.264 15:50:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.264 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.264 15:50:19 -- accel/accel.sh@20 -- # val= 00:07:40.264 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.264 15:50:19 -- accel/accel.sh@20 -- # val=software 00:07:40.264 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.264 15:50:19 -- accel/accel.sh@22 -- # accel_module=software 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.264 15:50:19 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:40.264 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.264 15:50:19 -- accel/accel.sh@20 -- # val=32 00:07:40.264 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.264 15:50:19 
-- accel/accel.sh@19 -- # read -r var val 00:07:40.264 15:50:19 -- accel/accel.sh@20 -- # val=32 00:07:40.264 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.264 15:50:19 -- accel/accel.sh@20 -- # val=2 00:07:40.264 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.264 15:50:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.264 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.264 15:50:19 -- accel/accel.sh@20 -- # val=Yes 00:07:40.264 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.264 15:50:19 -- accel/accel.sh@20 -- # val= 00:07:40.264 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.264 15:50:19 -- accel/accel.sh@20 -- # val= 00:07:40.264 15:50:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.264 15:50:19 -- accel/accel.sh@19 -- # read -r var val 00:07:42.310 15:50:21 -- accel/accel.sh@20 -- # val= 00:07:42.310 15:50:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.310 15:50:21 -- accel/accel.sh@19 -- # IFS=: 00:07:42.310 15:50:21 -- accel/accel.sh@19 -- # read -r var val 00:07:42.310 15:50:21 -- accel/accel.sh@20 -- # val= 00:07:42.310 15:50:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.310 15:50:21 -- accel/accel.sh@19 -- # IFS=: 00:07:42.310 15:50:21 -- accel/accel.sh@19 -- # read -r var val 00:07:42.310 15:50:21 -- accel/accel.sh@20 -- # val= 00:07:42.310 15:50:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.310 15:50:21 -- accel/accel.sh@19 -- # IFS=: 00:07:42.310 15:50:21 -- accel/accel.sh@19 -- # read -r var val 00:07:42.310 15:50:21 -- accel/accel.sh@20 -- # val= 00:07:42.310 15:50:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.310 15:50:21 -- accel/accel.sh@19 -- # IFS=: 00:07:42.310 15:50:21 -- accel/accel.sh@19 -- # read -r var val 00:07:42.310 15:50:21 -- accel/accel.sh@20 -- # val= 00:07:42.310 15:50:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.310 15:50:21 -- accel/accel.sh@19 -- # IFS=: 00:07:42.310 15:50:21 -- accel/accel.sh@19 -- # read -r var val 00:07:42.310 15:50:21 -- accel/accel.sh@20 -- # val= 00:07:42.310 15:50:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.310 15:50:21 -- accel/accel.sh@19 -- # IFS=: 00:07:42.310 15:50:21 -- accel/accel.sh@19 -- # read -r var val 00:07:42.310 15:50:21 -- accel/accel.sh@20 -- # val= 00:07:42.310 15:50:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.310 15:50:21 -- accel/accel.sh@19 -- # IFS=: 00:07:42.310 15:50:21 -- accel/accel.sh@19 -- # read -r var val 00:07:42.310 15:50:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:42.310 15:50:21 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:42.310 15:50:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.310 00:07:42.310 real 0m2.535s 00:07:42.310 user 0m2.365s 00:07:42.310 sys 0m0.185s 00:07:42.310 15:50:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:42.310 15:50:21 -- common/autotest_common.sh@10 -- # set +x 
00:07:42.310 ************************************ 00:07:42.310 END TEST accel_decomp_mthread 00:07:42.310 ************************************ 00:07:42.310 15:50:21 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:42.310 15:50:21 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:42.310 15:50:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.310 15:50:21 -- common/autotest_common.sh@10 -- # set +x 00:07:42.310 ************************************ 00:07:42.310 START TEST accel_deomp_full_mthread 00:07:42.310 ************************************ 00:07:42.310 15:50:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:42.310 15:50:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:42.310 15:50:21 -- accel/accel.sh@17 -- # local accel_module 00:07:42.310 15:50:21 -- accel/accel.sh@19 -- # IFS=: 00:07:42.310 15:50:21 -- accel/accel.sh@19 -- # read -r var val 00:07:42.310 15:50:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:42.310 15:50:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:42.310 15:50:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.310 15:50:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.310 15:50:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.310 15:50:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.310 15:50:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.310 15:50:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.310 15:50:21 -- accel/accel.sh@40 -- # local IFS=, 00:07:42.310 15:50:21 -- accel/accel.sh@41 -- # jq -r . 00:07:42.310 [2024-04-26 15:50:21.923327] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:07:42.310 [2024-04-26 15:50:21.923404] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2294294 ] 00:07:42.310 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.570 [2024-04-26 15:50:22.024903] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.570 [2024-04-26 15:50:22.237673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.829 15:50:22 -- accel/accel.sh@20 -- # val= 00:07:42.829 15:50:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 15:50:22 -- accel/accel.sh@20 -- # val= 00:07:42.829 15:50:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 15:50:22 -- accel/accel.sh@20 -- # val= 00:07:42.829 15:50:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 15:50:22 -- accel/accel.sh@20 -- # val=0x1 00:07:42.829 15:50:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 15:50:22 -- accel/accel.sh@20 -- # val= 00:07:42.829 15:50:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 15:50:22 -- accel/accel.sh@20 -- # val= 00:07:42.829 15:50:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 15:50:22 -- accel/accel.sh@20 -- # val=decompress 00:07:42.829 15:50:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 15:50:22 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 15:50:22 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:42.829 15:50:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 15:50:22 -- accel/accel.sh@20 -- # val= 00:07:42.829 15:50:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 15:50:22 -- accel/accel.sh@20 -- # val=software 00:07:42.829 15:50:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 15:50:22 -- accel/accel.sh@22 -- # accel_module=software 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 15:50:22 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:42.829 15:50:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # read -r var val 00:07:42.829 15:50:22 -- accel/accel.sh@20 -- # val=32 00:07:42.829 15:50:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.829 15:50:22 -- accel/accel.sh@19 -- # IFS=: 00:07:42.829 
15:50:22 -- accel/accel.sh@19 -- # read -r var val 00:07:42.830 15:50:22 -- accel/accel.sh@20 -- # val=32 00:07:42.830 15:50:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.830 15:50:22 -- accel/accel.sh@19 -- # IFS=: 00:07:42.830 15:50:22 -- accel/accel.sh@19 -- # read -r var val 00:07:42.830 15:50:22 -- accel/accel.sh@20 -- # val=2 00:07:42.830 15:50:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.830 15:50:22 -- accel/accel.sh@19 -- # IFS=: 00:07:42.830 15:50:22 -- accel/accel.sh@19 -- # read -r var val 00:07:42.830 15:50:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.830 15:50:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.830 15:50:22 -- accel/accel.sh@19 -- # IFS=: 00:07:42.830 15:50:22 -- accel/accel.sh@19 -- # read -r var val 00:07:42.830 15:50:22 -- accel/accel.sh@20 -- # val=Yes 00:07:42.830 15:50:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.830 15:50:22 -- accel/accel.sh@19 -- # IFS=: 00:07:42.830 15:50:22 -- accel/accel.sh@19 -- # read -r var val 00:07:42.830 15:50:22 -- accel/accel.sh@20 -- # val= 00:07:42.830 15:50:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.830 15:50:22 -- accel/accel.sh@19 -- # IFS=: 00:07:42.830 15:50:22 -- accel/accel.sh@19 -- # read -r var val 00:07:42.830 15:50:22 -- accel/accel.sh@20 -- # val= 00:07:42.830 15:50:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:42.830 15:50:22 -- accel/accel.sh@19 -- # IFS=: 00:07:42.830 15:50:22 -- accel/accel.sh@19 -- # read -r var val 00:07:45.365 15:50:24 -- accel/accel.sh@20 -- # val= 00:07:45.365 15:50:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.365 15:50:24 -- accel/accel.sh@19 -- # IFS=: 00:07:45.365 15:50:24 -- accel/accel.sh@19 -- # read -r var val 00:07:45.365 15:50:24 -- accel/accel.sh@20 -- # val= 00:07:45.365 15:50:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.365 15:50:24 -- accel/accel.sh@19 -- # IFS=: 00:07:45.365 15:50:24 -- accel/accel.sh@19 -- # read -r var val 00:07:45.365 15:50:24 -- accel/accel.sh@20 -- # val= 00:07:45.365 15:50:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.365 15:50:24 -- accel/accel.sh@19 -- # IFS=: 00:07:45.365 15:50:24 -- accel/accel.sh@19 -- # read -r var val 00:07:45.365 15:50:24 -- accel/accel.sh@20 -- # val= 00:07:45.365 15:50:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.365 15:50:24 -- accel/accel.sh@19 -- # IFS=: 00:07:45.365 15:50:24 -- accel/accel.sh@19 -- # read -r var val 00:07:45.365 15:50:24 -- accel/accel.sh@20 -- # val= 00:07:45.365 15:50:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.365 15:50:24 -- accel/accel.sh@19 -- # IFS=: 00:07:45.365 15:50:24 -- accel/accel.sh@19 -- # read -r var val 00:07:45.365 15:50:24 -- accel/accel.sh@20 -- # val= 00:07:45.365 15:50:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.365 15:50:24 -- accel/accel.sh@19 -- # IFS=: 00:07:45.365 15:50:24 -- accel/accel.sh@19 -- # read -r var val 00:07:45.365 15:50:24 -- accel/accel.sh@20 -- # val= 00:07:45.365 15:50:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.365 15:50:24 -- accel/accel.sh@19 -- # IFS=: 00:07:45.365 15:50:24 -- accel/accel.sh@19 -- # read -r var val 00:07:45.365 15:50:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.365 15:50:24 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:45.365 15:50:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.365 00:07:45.365 real 0m2.575s 00:07:45.365 user 0m2.409s 00:07:45.365 sys 0m0.180s 00:07:45.365 15:50:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:45.365 15:50:24 -- common/autotest_common.sh@10 -- # 
set +x 00:07:45.365 ************************************ 00:07:45.365 END TEST accel_deomp_full_mthread 00:07:45.365 ************************************ 00:07:45.365 15:50:24 -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:45.365 15:50:24 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:45.365 15:50:24 -- accel/accel.sh@137 -- # build_accel_config 00:07:45.365 15:50:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:45.365 15:50:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.365 15:50:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.365 15:50:24 -- common/autotest_common.sh@10 -- # set +x 00:07:45.365 15:50:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.365 15:50:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.365 15:50:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.365 15:50:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.365 15:50:24 -- accel/accel.sh@40 -- # local IFS=, 00:07:45.365 15:50:24 -- accel/accel.sh@41 -- # jq -r . 00:07:45.365 ************************************ 00:07:45.365 START TEST accel_dif_functional_tests 00:07:45.365 ************************************ 00:07:45.365 15:50:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:45.365 [2024-04-26 15:50:24.674359] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:45.365 [2024-04-26 15:50:24.674435] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2294763 ] 00:07:45.365 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.365 [2024-04-26 15:50:24.776669] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:45.365 [2024-04-26 15:50:24.989003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.365 [2024-04-26 15:50:24.989075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.365 [2024-04-26 15:50:24.989081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.934 00:07:45.934 00:07:45.934 CUnit - A unit testing framework for C - Version 2.1-3 00:07:45.934 http://cunit.sourceforge.net/ 00:07:45.934 00:07:45.934 00:07:45.934 Suite: accel_dif 00:07:45.934 Test: verify: DIF generated, GUARD check ...passed 00:07:45.934 Test: verify: DIF generated, APPTAG check ...passed 00:07:45.934 Test: verify: DIF generated, REFTAG check ...passed 00:07:45.934 Test: verify: DIF not generated, GUARD check ...[2024-04-26 15:50:25.357110] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:45.934 [2024-04-26 15:50:25.357167] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:45.934 passed 00:07:45.934 Test: verify: DIF not generated, APPTAG check ...[2024-04-26 15:50:25.357214] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:45.934 [2024-04-26 15:50:25.357242] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:45.934 passed 00:07:45.934 Test: verify: DIF not generated, REFTAG check ...[2024-04-26 15:50:25.357270] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:45.934 [2024-04-26 
15:50:25.357293] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:45.934 passed 00:07:45.935 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:45.935 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-26 15:50:25.357373] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:45.935 passed 00:07:45.935 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:45.935 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:45.935 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:45.935 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-26 15:50:25.357535] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:45.935 passed 00:07:45.935 Test: generate copy: DIF generated, GUARD check ...passed 00:07:45.935 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:45.935 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:45.935 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:45.935 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:45.935 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:45.935 Test: generate copy: iovecs-len validate ...[2024-04-26 15:50:25.357828] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:45.935 passed 00:07:45.935 Test: generate copy: buffer alignment validate ...passed 00:07:45.935 00:07:45.935 Run Summary: Type Total Ran Passed Failed Inactive 00:07:45.935 suites 1 1 n/a 0 0 00:07:45.935 tests 20 20 20 0 0 00:07:45.935 asserts 204 204 204 0 n/a 00:07:45.935 00:07:45.935 Elapsed time = 0.003 seconds 00:07:47.315 00:07:47.315 real 0m1.984s 00:07:47.315 user 0m4.112s 00:07:47.315 sys 0m0.236s 00:07:47.315 15:50:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:47.315 15:50:26 -- common/autotest_common.sh@10 -- # set +x 00:07:47.315 ************************************ 00:07:47.315 END TEST accel_dif_functional_tests 00:07:47.315 ************************************ 00:07:47.315 00:07:47.315 real 1m4.429s 00:07:47.315 user 1m11.127s 00:07:47.315 sys 0m7.183s 00:07:47.315 15:50:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:47.315 15:50:26 -- common/autotest_common.sh@10 -- # set +x 00:07:47.315 ************************************ 00:07:47.315 END TEST accel 00:07:47.315 ************************************ 00:07:47.315 15:50:26 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:47.315 15:50:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:47.315 15:50:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.315 15:50:26 -- common/autotest_common.sh@10 -- # set +x 00:07:47.315 ************************************ 00:07:47.315 START TEST accel_rpc 00:07:47.315 ************************************ 00:07:47.315 15:50:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:47.315 * Looking for test storage... 
00:07:47.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:47.315 15:50:26 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:47.315 15:50:26 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2295094 00:07:47.315 15:50:26 -- accel/accel_rpc.sh@15 -- # waitforlisten 2295094 00:07:47.315 15:50:26 -- common/autotest_common.sh@817 -- # '[' -z 2295094 ']' 00:07:47.315 15:50:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.315 15:50:26 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:47.315 15:50:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:47.315 15:50:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.315 15:50:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:47.315 15:50:26 -- common/autotest_common.sh@10 -- # set +x 00:07:47.315 [2024-04-26 15:50:26.966732] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:47.315 [2024-04-26 15:50:26.966819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2295094 ] 00:07:47.575 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.575 [2024-04-26 15:50:27.073188] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.835 [2024-04-26 15:50:27.293120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.094 15:50:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:48.094 15:50:27 -- common/autotest_common.sh@850 -- # return 0 00:07:48.094 15:50:27 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:48.094 15:50:27 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:48.094 15:50:27 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:48.094 15:50:27 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:48.094 15:50:27 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:48.094 15:50:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:48.094 15:50:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.094 15:50:27 -- common/autotest_common.sh@10 -- # set +x 00:07:48.354 ************************************ 00:07:48.354 START TEST accel_assign_opcode 00:07:48.354 ************************************ 00:07:48.354 15:50:27 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:07:48.354 15:50:27 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:48.354 15:50:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:48.354 15:50:27 -- common/autotest_common.sh@10 -- # set +x 00:07:48.354 [2024-04-26 15:50:27.855090] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:48.354 15:50:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:48.354 15:50:27 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:48.354 15:50:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:48.354 15:50:27 -- common/autotest_common.sh@10 -- # set +x 00:07:48.354 [2024-04-26 15:50:27.863091] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation copy will be assigned to module software 00:07:48.354 15:50:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:48.354 15:50:27 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:48.354 15:50:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:48.354 15:50:27 -- common/autotest_common.sh@10 -- # set +x 00:07:49.292 15:50:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:49.292 15:50:28 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:49.292 15:50:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:49.292 15:50:28 -- common/autotest_common.sh@10 -- # set +x 00:07:49.292 15:50:28 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:49.292 15:50:28 -- accel/accel_rpc.sh@42 -- # grep software 00:07:49.292 15:50:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:49.292 software 00:07:49.292 00:07:49.292 real 0m0.959s 00:07:49.292 user 0m0.043s 00:07:49.292 sys 0m0.007s 00:07:49.292 15:50:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:49.292 15:50:28 -- common/autotest_common.sh@10 -- # set +x 00:07:49.292 ************************************ 00:07:49.292 END TEST accel_assign_opcode 00:07:49.293 ************************************ 00:07:49.293 15:50:28 -- accel/accel_rpc.sh@55 -- # killprocess 2295094 00:07:49.293 15:50:28 -- common/autotest_common.sh@936 -- # '[' -z 2295094 ']' 00:07:49.293 15:50:28 -- common/autotest_common.sh@940 -- # kill -0 2295094 00:07:49.293 15:50:28 -- common/autotest_common.sh@941 -- # uname 00:07:49.293 15:50:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:49.293 15:50:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2295094 00:07:49.293 15:50:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:49.293 15:50:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:49.293 15:50:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2295094' 00:07:49.293 killing process with pid 2295094 00:07:49.293 15:50:28 -- common/autotest_common.sh@955 -- # kill 2295094 00:07:49.293 15:50:28 -- common/autotest_common.sh@960 -- # wait 2295094 00:07:51.832 00:07:51.832 real 0m4.473s 00:07:51.832 user 0m4.436s 00:07:51.832 sys 0m0.582s 00:07:51.832 15:50:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:51.832 15:50:31 -- common/autotest_common.sh@10 -- # set +x 00:07:51.832 ************************************ 00:07:51.832 END TEST accel_rpc 00:07:51.832 ************************************ 00:07:51.832 15:50:31 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:51.832 15:50:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:51.832 15:50:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.832 15:50:31 -- common/autotest_common.sh@10 -- # set +x 00:07:51.832 ************************************ 00:07:51.832 START TEST app_cmdline 00:07:51.832 ************************************ 00:07:51.832 15:50:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:51.832 * Looking for test storage... 
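The accel_assign_opcode trace above reduces to three RPCs against a spdk_tgt started with --wait-for-rpc: assign the copy opcode to a module before the framework initializes, complete initialization, then read the assignment back. Roughly, assuming the target is already listening on /var/tmp/spdk.sock:
scripts/rpc.py accel_assign_opc -o copy -m software       # pre-init; 'incorrect' is accepted here too, as logged
scripts/rpc.py framework_start_init                       # finishes the --wait-for-rpc startup
scripts/rpc.py accel_get_opc_assignments | jq -r .copy    # expected output: software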
00:07:52.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:52.092 15:50:31 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:52.092 15:50:31 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2296037 00:07:52.092 15:50:31 -- app/cmdline.sh@18 -- # waitforlisten 2296037 00:07:52.092 15:50:31 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:52.092 15:50:31 -- common/autotest_common.sh@817 -- # '[' -z 2296037 ']' 00:07:52.092 15:50:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.092 15:50:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:52.092 15:50:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.092 15:50:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:52.092 15:50:31 -- common/autotest_common.sh@10 -- # set +x 00:07:52.092 [2024-04-26 15:50:31.606209] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:07:52.092 [2024-04-26 15:50:31.606297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2296037 ] 00:07:52.092 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.092 [2024-04-26 15:50:31.707608] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.351 [2024-04-26 15:50:31.935845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.289 15:50:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:53.289 15:50:32 -- common/autotest_common.sh@850 -- # return 0 00:07:53.289 15:50:32 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:53.547 { 00:07:53.547 "version": "SPDK v24.05-pre git sha1 8571999d8", 00:07:53.547 "fields": { 00:07:53.547 "major": 24, 00:07:53.547 "minor": 5, 00:07:53.547 "patch": 0, 00:07:53.547 "suffix": "-pre", 00:07:53.547 "commit": "8571999d8" 00:07:53.547 } 00:07:53.547 } 00:07:53.547 15:50:33 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:53.547 15:50:33 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:53.547 15:50:33 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:53.547 15:50:33 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:53.547 15:50:33 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:53.547 15:50:33 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:53.547 15:50:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.547 15:50:33 -- app/cmdline.sh@26 -- # sort 00:07:53.547 15:50:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.547 15:50:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.547 15:50:33 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:53.547 15:50:33 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:53.547 15:50:33 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:53.547 15:50:33 -- common/autotest_common.sh@638 -- # local es=0 00:07:53.547 15:50:33 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:53.547 15:50:33 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.547 15:50:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:53.547 15:50:33 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.547 15:50:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:53.547 15:50:33 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.547 15:50:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:53.548 15:50:33 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.548 15:50:33 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:53.548 15:50:33 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:53.806 request: 00:07:53.806 { 00:07:53.806 "method": "env_dpdk_get_mem_stats", 00:07:53.806 "req_id": 1 00:07:53.806 } 00:07:53.806 Got JSON-RPC error response 00:07:53.806 response: 00:07:53.806 { 00:07:53.806 "code": -32601, 00:07:53.806 "message": "Method not found" 00:07:53.806 } 00:07:53.806 15:50:33 -- common/autotest_common.sh@641 -- # es=1 00:07:53.806 15:50:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:53.806 15:50:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:53.806 15:50:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:53.806 15:50:33 -- app/cmdline.sh@1 -- # killprocess 2296037 00:07:53.806 15:50:33 -- common/autotest_common.sh@936 -- # '[' -z 2296037 ']' 00:07:53.806 15:50:33 -- common/autotest_common.sh@940 -- # kill -0 2296037 00:07:53.806 15:50:33 -- common/autotest_common.sh@941 -- # uname 00:07:53.806 15:50:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:53.806 15:50:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2296037 00:07:53.806 15:50:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:53.806 15:50:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:53.806 15:50:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2296037' 00:07:53.806 killing process with pid 2296037 00:07:53.806 15:50:33 -- common/autotest_common.sh@955 -- # kill 2296037 00:07:53.806 15:50:33 -- common/autotest_common.sh@960 -- # wait 2296037 00:07:56.335 00:07:56.335 real 0m4.281s 00:07:56.335 user 0m4.455s 00:07:56.335 sys 0m0.571s 00:07:56.335 15:50:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:56.335 15:50:35 -- common/autotest_common.sh@10 -- # set +x 00:07:56.335 ************************************ 00:07:56.335 END TEST app_cmdline 00:07:56.335 ************************************ 00:07:56.335 15:50:35 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:56.335 15:50:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:56.335 15:50:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.335 15:50:35 -- common/autotest_common.sh@10 -- # set +x 00:07:56.335 ************************************ 00:07:56.335 START TEST version 00:07:56.335 
************************************ 00:07:56.335 15:50:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:56.335 * Looking for test storage... 00:07:56.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:56.335 15:50:35 -- app/version.sh@17 -- # get_header_version major 00:07:56.336 15:50:35 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:56.336 15:50:35 -- app/version.sh@14 -- # cut -f2 00:07:56.336 15:50:35 -- app/version.sh@14 -- # tr -d '"' 00:07:56.336 15:50:35 -- app/version.sh@17 -- # major=24 00:07:56.336 15:50:35 -- app/version.sh@18 -- # get_header_version minor 00:07:56.336 15:50:35 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:56.336 15:50:35 -- app/version.sh@14 -- # cut -f2 00:07:56.336 15:50:35 -- app/version.sh@14 -- # tr -d '"' 00:07:56.336 15:50:35 -- app/version.sh@18 -- # minor=5 00:07:56.336 15:50:35 -- app/version.sh@19 -- # get_header_version patch 00:07:56.336 15:50:35 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:56.336 15:50:35 -- app/version.sh@14 -- # cut -f2 00:07:56.336 15:50:35 -- app/version.sh@14 -- # tr -d '"' 00:07:56.336 15:50:35 -- app/version.sh@19 -- # patch=0 00:07:56.336 15:50:35 -- app/version.sh@20 -- # get_header_version suffix 00:07:56.336 15:50:35 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:56.336 15:50:35 -- app/version.sh@14 -- # tr -d '"' 00:07:56.336 15:50:35 -- app/version.sh@14 -- # cut -f2 00:07:56.336 15:50:35 -- app/version.sh@20 -- # suffix=-pre 00:07:56.336 15:50:35 -- app/version.sh@22 -- # version=24.5 00:07:56.336 15:50:35 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:56.336 15:50:35 -- app/version.sh@28 -- # version=24.5rc0 00:07:56.336 15:50:35 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:56.336 15:50:35 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:56.336 15:50:36 -- app/version.sh@30 -- # py_version=24.5rc0 00:07:56.336 15:50:36 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:07:56.336 00:07:56.336 real 0m0.154s 00:07:56.336 user 0m0.081s 00:07:56.336 sys 0m0.108s 00:07:56.336 15:50:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:56.336 15:50:36 -- common/autotest_common.sh@10 -- # set +x 00:07:56.336 ************************************ 00:07:56.336 END TEST version 00:07:56.336 ************************************ 00:07:56.595 15:50:36 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:07:56.595 15:50:36 -- spdk/autotest.sh@194 -- # uname -s 00:07:56.595 15:50:36 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:56.595 15:50:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:56.595 15:50:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:56.595 15:50:36 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:56.595 15:50:36 
-- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:07:56.595 15:50:36 -- spdk/autotest.sh@258 -- # timing_exit lib 00:07:56.595 15:50:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:56.595 15:50:36 -- common/autotest_common.sh@10 -- # set +x 00:07:56.595 15:50:36 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:56.595 15:50:36 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:07:56.595 15:50:36 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:07:56.595 15:50:36 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:07:56.595 15:50:36 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:07:56.595 15:50:36 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:07:56.595 15:50:36 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:56.595 15:50:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:56.595 15:50:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.595 15:50:36 -- common/autotest_common.sh@10 -- # set +x 00:07:56.595 ************************************ 00:07:56.595 START TEST nvmf_tcp 00:07:56.595 ************************************ 00:07:56.595 15:50:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:56.854 * Looking for test storage... 00:07:56.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:56.854 15:50:36 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:56.854 15:50:36 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:56.854 15:50:36 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:56.854 15:50:36 -- nvmf/common.sh@7 -- # uname -s 00:07:56.854 15:50:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.854 15:50:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.854 15:50:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.854 15:50:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.854 15:50:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.854 15:50:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.854 15:50:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.854 15:50:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.854 15:50:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.854 15:50:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.854 15:50:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:56.854 15:50:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:56.854 15:50:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.854 15:50:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.854 15:50:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:56.854 15:50:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.854 15:50:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:56.854 15:50:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.854 15:50:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.854 15:50:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.854 15:50:36 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.854 15:50:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.854 15:50:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.854 15:50:36 -- paths/export.sh@5 -- # export PATH 00:07:56.854 15:50:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.854 15:50:36 -- nvmf/common.sh@47 -- # : 0 00:07:56.854 15:50:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:56.854 15:50:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:56.854 15:50:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.854 15:50:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.854 15:50:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.854 15:50:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:56.854 15:50:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:56.854 15:50:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:56.854 15:50:36 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:56.854 15:50:36 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:56.854 15:50:36 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:56.854 15:50:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:56.854 15:50:36 -- common/autotest_common.sh@10 -- # set +x 00:07:56.854 15:50:36 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:56.854 15:50:36 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:56.854 15:50:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:56.854 15:50:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.854 15:50:36 -- common/autotest_common.sh@10 -- # set +x 00:07:56.854 ************************************ 00:07:56.854 START TEST nvmf_example 00:07:56.854 ************************************ 00:07:56.854 15:50:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:57.112 * Looking for test storage... 
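The host identity used throughout these nvmf tests comes from nvme-cli, as traced in nvmf/common.sh above: the generated host NQN embeds a UUID, and that same UUID is reused as the host ID. A small sketch (the expansion used to strip the prefix is an assumption; the trace only shows the resulting values):
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
NVME_HOSTID=${NVME_HOSTNQN##*:}         # bare UUID (assumed derivation)
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")   # later handed to 'nvme connect'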
00:07:57.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.112 15:50:36 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.112 15:50:36 -- nvmf/common.sh@7 -- # uname -s 00:07:57.112 15:50:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.112 15:50:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.112 15:50:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.112 15:50:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.112 15:50:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.112 15:50:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.112 15:50:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.112 15:50:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.112 15:50:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.112 15:50:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.112 15:50:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:57.112 15:50:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:57.112 15:50:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.112 15:50:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.112 15:50:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.112 15:50:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.112 15:50:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.112 15:50:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.112 15:50:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.112 15:50:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.112 15:50:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.112 15:50:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.112 15:50:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.112 15:50:36 -- paths/export.sh@5 -- # export PATH 00:07:57.112 15:50:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.112 15:50:36 -- nvmf/common.sh@47 -- # : 0 00:07:57.112 15:50:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:57.113 15:50:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:57.113 15:50:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.113 15:50:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.113 15:50:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.113 15:50:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:57.113 15:50:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:57.113 15:50:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:57.113 15:50:36 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:57.113 15:50:36 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:57.113 15:50:36 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:57.113 15:50:36 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:57.113 15:50:36 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:57.113 15:50:36 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:57.113 15:50:36 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:57.113 15:50:36 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:57.113 15:50:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:57.113 15:50:36 -- common/autotest_common.sh@10 -- # set +x 00:07:57.113 15:50:36 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:57.113 15:50:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:57.113 15:50:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.113 15:50:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:57.113 15:50:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:57.113 15:50:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:57.113 15:50:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.113 15:50:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.113 15:50:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.113 15:50:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:57.113 15:50:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:57.113 15:50:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:57.113 15:50:36 -- 
common/autotest_common.sh@10 -- # set +x 00:08:02.393 15:50:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:02.393 15:50:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:02.393 15:50:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:02.393 15:50:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:02.393 15:50:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:02.393 15:50:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:02.393 15:50:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:02.393 15:50:41 -- nvmf/common.sh@295 -- # net_devs=() 00:08:02.393 15:50:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:02.393 15:50:41 -- nvmf/common.sh@296 -- # e810=() 00:08:02.393 15:50:41 -- nvmf/common.sh@296 -- # local -ga e810 00:08:02.393 15:50:41 -- nvmf/common.sh@297 -- # x722=() 00:08:02.393 15:50:41 -- nvmf/common.sh@297 -- # local -ga x722 00:08:02.393 15:50:41 -- nvmf/common.sh@298 -- # mlx=() 00:08:02.393 15:50:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:02.393 15:50:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.393 15:50:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.393 15:50:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.393 15:50:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.393 15:50:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.393 15:50:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.393 15:50:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.393 15:50:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.393 15:50:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.393 15:50:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.393 15:50:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.393 15:50:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:02.393 15:50:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:02.393 15:50:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:02.393 15:50:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:02.393 15:50:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:02.393 15:50:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:02.393 15:50:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.393 15:50:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:02.393 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:02.393 15:50:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.393 15:50:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.393 15:50:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.393 15:50:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.393 15:50:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.393 15:50:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.393 15:50:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:02.393 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:02.393 15:50:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.393 15:50:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.393 15:50:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.393 15:50:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
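The device scan above matches the two e810 physical functions by vendor/device ID (0x8086:0x159b); the step that follows resolves each function to its kernel net device (cvl_0_0 and cvl_0_1) by expanding /sys/bus/pci/devices/<pci>/net/. A hand-run equivalent of both steps, assuming the standard sysfs layout:
for pci in /sys/bus/pci/devices/*; do
    # pick out Intel e810 functions (vendor 0x8086, device 0x159b) and list their net devices
    if [[ $(< "$pci/vendor") == 0x8086 && $(< "$pci/device") == 0x159b ]]; then
        echo "${pci##*/}: $(ls "$pci/net" 2> /dev/null)"     # e.g. 0000:86:00.0: cvl_0_0
    fi
done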
00:08:02.393 15:50:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.393 15:50:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:02.393 15:50:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:02.393 15:50:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:02.393 15:50:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.393 15:50:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.393 15:50:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:02.393 15:50:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.393 15:50:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:02.393 Found net devices under 0000:86:00.0: cvl_0_0 00:08:02.393 15:50:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.393 15:50:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.393 15:50:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.393 15:50:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:02.393 15:50:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.394 15:50:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:02.394 Found net devices under 0000:86:00.1: cvl_0_1 00:08:02.394 15:50:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.394 15:50:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:02.394 15:50:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:02.394 15:50:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:02.394 15:50:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:02.394 15:50:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:02.394 15:50:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.394 15:50:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.394 15:50:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.394 15:50:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:02.394 15:50:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.394 15:50:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.394 15:50:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:02.394 15:50:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.394 15:50:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.394 15:50:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:02.394 15:50:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:02.394 15:50:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.394 15:50:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.394 15:50:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.394 15:50:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.652 15:50:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:02.652 15:50:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.652 15:50:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:02.652 15:50:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.652 15:50:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:02.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:02.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:08:02.652 00:08:02.652 --- 10.0.0.2 ping statistics --- 00:08:02.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.652 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:08:02.652 15:50:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:02.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:08:02.652 00:08:02.652 --- 10.0.0.1 ping statistics --- 00:08:02.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.652 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:08:02.652 15:50:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.652 15:50:42 -- nvmf/common.sh@411 -- # return 0 00:08:02.652 15:50:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:02.652 15:50:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.652 15:50:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:02.652 15:50:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:02.652 15:50:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.652 15:50:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:02.652 15:50:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:02.652 15:50:42 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:02.652 15:50:42 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:02.652 15:50:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:02.652 15:50:42 -- common/autotest_common.sh@10 -- # set +x 00:08:02.652 15:50:42 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:02.652 15:50:42 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:02.652 15:50:42 -- target/nvmf_example.sh@34 -- # nvmfpid=2299975 00:08:02.652 15:50:42 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:02.652 15:50:42 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:02.652 15:50:42 -- target/nvmf_example.sh@36 -- # waitforlisten 2299975 00:08:02.652 15:50:42 -- common/autotest_common.sh@817 -- # '[' -z 2299975 ']' 00:08:02.652 15:50:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.652 15:50:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:02.652 15:50:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
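nvmf_tcp_init, traced above, puts the first e810 port into a private network namespace as the target side (10.0.0.2) and leaves the second port in the root namespace as the initiator side (10.0.0.1), opens TCP port 4420, and sanity-checks both directions with ping; the example nvmf app is then launched inside that namespace with ip netns exec. Condensed from the commands in the trace:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow the NVMe/TCP listener port
ping -c 1 10.0.0.2                                                   # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> initiator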
00:08:02.652 15:50:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:02.652 15:50:42 -- common/autotest_common.sh@10 -- # set +x 00:08:02.910 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.477 15:50:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:03.477 15:50:43 -- common/autotest_common.sh@850 -- # return 0 00:08:03.477 15:50:43 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:03.477 15:50:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:03.477 15:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:03.477 15:50:43 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:03.477 15:50:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:03.477 15:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:03.477 15:50:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:03.477 15:50:43 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:03.477 15:50:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:03.736 15:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:03.736 15:50:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:03.736 15:50:43 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:03.736 15:50:43 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:03.736 15:50:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:03.736 15:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:03.736 15:50:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:03.736 15:50:43 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:03.736 15:50:43 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:03.736 15:50:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:03.736 15:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:03.736 15:50:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:03.736 15:50:43 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.736 15:50:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:03.736 15:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:03.736 15:50:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:03.736 15:50:43 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:03.736 15:50:43 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:03.736 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.961 Initializing NVMe Controllers 00:08:15.961 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:15.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:15.961 Initialization complete. Launching workers. 
00:08:15.961 ======================================================== 00:08:15.961 Latency(us) 00:08:15.961 Device Information : IOPS MiB/s Average min max 00:08:15.961 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13033.31 50.91 4910.26 850.81 15336.63 00:08:15.961 ======================================================== 00:08:15.961 Total : 13033.31 50.91 4910.26 850.81 15336.63 00:08:15.961 00:08:15.961 15:50:53 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:15.961 15:50:53 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:15.961 15:50:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:15.961 15:50:53 -- nvmf/common.sh@117 -- # sync 00:08:15.961 15:50:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:15.961 15:50:53 -- nvmf/common.sh@120 -- # set +e 00:08:15.961 15:50:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:15.961 15:50:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:15.961 rmmod nvme_tcp 00:08:15.961 rmmod nvme_fabrics 00:08:15.961 rmmod nvme_keyring 00:08:15.961 15:50:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:15.961 15:50:53 -- nvmf/common.sh@124 -- # set -e 00:08:15.961 15:50:53 -- nvmf/common.sh@125 -- # return 0 00:08:15.961 15:50:53 -- nvmf/common.sh@478 -- # '[' -n 2299975 ']' 00:08:15.961 15:50:53 -- nvmf/common.sh@479 -- # killprocess 2299975 00:08:15.961 15:50:53 -- common/autotest_common.sh@936 -- # '[' -z 2299975 ']' 00:08:15.961 15:50:53 -- common/autotest_common.sh@940 -- # kill -0 2299975 00:08:15.961 15:50:53 -- common/autotest_common.sh@941 -- # uname 00:08:15.961 15:50:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:15.961 15:50:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2299975 00:08:15.961 15:50:53 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:08:15.961 15:50:53 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:08:15.961 15:50:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2299975' 00:08:15.961 killing process with pid 2299975 00:08:15.961 15:50:53 -- common/autotest_common.sh@955 -- # kill 2299975 00:08:15.961 15:50:53 -- common/autotest_common.sh@960 -- # wait 2299975 00:08:15.961 nvmf threads initialize successfully 00:08:15.961 bdev subsystem init successfully 00:08:15.961 created a nvmf target service 00:08:15.961 create targets's poll groups done 00:08:15.961 all subsystems of target started 00:08:15.961 nvmf target is running 00:08:15.961 all subsystems of target stopped 00:08:15.961 destroy targets's poll groups done 00:08:15.961 destroyed the nvmf target service 00:08:15.961 bdev subsystem finish successfully 00:08:15.961 nvmf threads destroy successfully 00:08:15.961 15:50:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:15.961 15:50:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:15.961 15:50:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:15.961 15:50:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:15.961 15:50:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:15.961 15:50:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.961 15:50:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.961 15:50:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.337 15:50:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:17.337 15:50:56 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:17.337 15:50:56 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:08:17.337 15:50:56 -- common/autotest_common.sh@10 -- # set +x 00:08:17.337 00:08:17.337 real 0m20.540s 00:08:17.337 user 0m48.867s 00:08:17.337 sys 0m5.757s 00:08:17.337 15:50:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:17.337 15:50:57 -- common/autotest_common.sh@10 -- # set +x 00:08:17.337 ************************************ 00:08:17.337 END TEST nvmf_example 00:08:17.337 ************************************ 00:08:17.596 15:50:57 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:17.596 15:50:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:17.596 15:50:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.596 15:50:57 -- common/autotest_common.sh@10 -- # set +x 00:08:17.596 ************************************ 00:08:17.596 START TEST nvmf_filesystem 00:08:17.596 ************************************ 00:08:17.596 15:50:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:17.596 * Looking for test storage... 00:08:17.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.596 15:50:57 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:17.596 15:50:57 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:17.596 15:50:57 -- common/autotest_common.sh@34 -- # set -e 00:08:17.596 15:50:57 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:17.596 15:50:57 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:17.596 15:50:57 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:17.596 15:50:57 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:17.596 15:50:57 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:17.596 15:50:57 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:17.596 15:50:57 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:08:17.596 15:50:57 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:17.596 15:50:57 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:17.596 15:50:57 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:17.596 15:50:57 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:17.596 15:50:57 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:17.596 15:50:57 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:17.596 15:50:57 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:17.596 15:50:57 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:17.596 15:50:57 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:17.596 15:50:57 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:17.596 15:50:57 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:17.596 15:50:57 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:17.596 15:50:57 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:17.596 15:50:57 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:17.596 15:50:57 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:17.596 15:50:57 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:17.596 15:50:57 -- 
common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:17.596 15:50:57 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:17.596 15:50:57 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:17.596 15:50:57 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:17.596 15:50:57 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:17.596 15:50:57 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:17.596 15:50:57 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:17.596 15:50:57 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:17.596 15:50:57 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:17.596 15:50:57 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:17.596 15:50:57 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:17.596 15:50:57 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:17.596 15:50:57 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:17.596 15:50:57 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:17.596 15:50:57 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:17.596 15:50:57 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:17.596 15:50:57 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:17.596 15:50:57 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:17.596 15:50:57 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:17.596 15:50:57 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:17.596 15:50:57 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:17.596 15:50:57 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:17.596 15:50:57 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:17.596 15:50:57 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:17.596 15:50:57 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:17.596 15:50:57 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:17.596 15:50:57 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:17.596 15:50:57 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:17.596 15:50:57 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:17.596 15:50:57 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:17.596 15:50:57 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:17.596 15:50:57 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:17.596 15:50:57 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:08:17.596 15:50:57 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:17.596 15:50:57 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:08:17.596 15:50:57 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:08:17.596 15:50:57 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:08:17.596 15:50:57 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:08:17.596 15:50:57 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:08:17.596 15:50:57 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:08:17.596 15:50:57 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:08:17.596 15:50:57 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:08:17.596 15:50:57 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:08:17.596 15:50:57 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:08:17.596 15:50:57 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:08:17.596 15:50:57 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:08:17.596 
15:50:57 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:08:17.597 15:50:57 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:08:17.597 15:50:57 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:08:17.597 15:50:57 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:17.597 15:50:57 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:08:17.597 15:50:57 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:08:17.597 15:50:57 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:08:17.597 15:50:57 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:08:17.597 15:50:57 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:08:17.597 15:50:57 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:08:17.597 15:50:57 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:08:17.597 15:50:57 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:08:17.597 15:50:57 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:08:17.597 15:50:57 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:08:17.597 15:50:57 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:08:17.597 15:50:57 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:17.597 15:50:57 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:08:17.597 15:50:57 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:08:17.597 15:50:57 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:17.597 15:50:57 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:17.597 15:50:57 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:17.597 15:50:57 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:17.597 15:50:57 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:17.597 15:50:57 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:17.597 15:50:57 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:17.597 15:50:57 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:17.597 15:50:57 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:17.597 15:50:57 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:17.597 15:50:57 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:17.597 15:50:57 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:17.597 15:50:57 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:17.597 15:50:57 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:17.597 15:50:57 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:17.597 15:50:57 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:17.597 #define SPDK_CONFIG_H 00:08:17.597 #define SPDK_CONFIG_APPS 1 00:08:17.597 #define SPDK_CONFIG_ARCH native 00:08:17.597 #define SPDK_CONFIG_ASAN 1 00:08:17.597 #undef SPDK_CONFIG_AVAHI 00:08:17.597 #undef SPDK_CONFIG_CET 00:08:17.597 #define SPDK_CONFIG_COVERAGE 1 00:08:17.597 #define SPDK_CONFIG_CROSS_PREFIX 00:08:17.597 #undef SPDK_CONFIG_CRYPTO 00:08:17.597 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:17.597 
#undef SPDK_CONFIG_CUSTOMOCF 00:08:17.597 #undef SPDK_CONFIG_DAOS 00:08:17.597 #define SPDK_CONFIG_DAOS_DIR 00:08:17.597 #define SPDK_CONFIG_DEBUG 1 00:08:17.597 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:17.597 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:17.597 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:17.597 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:17.597 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:17.597 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:17.597 #define SPDK_CONFIG_EXAMPLES 1 00:08:17.597 #undef SPDK_CONFIG_FC 00:08:17.597 #define SPDK_CONFIG_FC_PATH 00:08:17.597 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:17.597 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:17.597 #undef SPDK_CONFIG_FUSE 00:08:17.597 #undef SPDK_CONFIG_FUZZER 00:08:17.597 #define SPDK_CONFIG_FUZZER_LIB 00:08:17.597 #undef SPDK_CONFIG_GOLANG 00:08:17.597 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:17.597 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:17.597 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:17.597 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:08:17.597 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:17.597 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:17.597 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:17.597 #define SPDK_CONFIG_IDXD 1 00:08:17.597 #undef SPDK_CONFIG_IDXD_KERNEL 00:08:17.597 #undef SPDK_CONFIG_IPSEC_MB 00:08:17.597 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:17.597 #define SPDK_CONFIG_ISAL 1 00:08:17.597 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:17.597 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:17.597 #define SPDK_CONFIG_LIBDIR 00:08:17.597 #undef SPDK_CONFIG_LTO 00:08:17.597 #define SPDK_CONFIG_MAX_LCORES 00:08:17.597 #define SPDK_CONFIG_NVME_CUSE 1 00:08:17.597 #undef SPDK_CONFIG_OCF 00:08:17.597 #define SPDK_CONFIG_OCF_PATH 00:08:17.597 #define SPDK_CONFIG_OPENSSL_PATH 00:08:17.597 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:17.597 #define SPDK_CONFIG_PGO_DIR 00:08:17.597 #undef SPDK_CONFIG_PGO_USE 00:08:17.597 #define SPDK_CONFIG_PREFIX /usr/local 00:08:17.597 #undef SPDK_CONFIG_RAID5F 00:08:17.597 #undef SPDK_CONFIG_RBD 00:08:17.597 #define SPDK_CONFIG_RDMA 1 00:08:17.597 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:17.597 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:17.597 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:17.597 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:17.597 #define SPDK_CONFIG_SHARED 1 00:08:17.597 #undef SPDK_CONFIG_SMA 00:08:17.597 #define SPDK_CONFIG_TESTS 1 00:08:17.597 #undef SPDK_CONFIG_TSAN 00:08:17.597 #define SPDK_CONFIG_UBLK 1 00:08:17.597 #define SPDK_CONFIG_UBSAN 1 00:08:17.597 #undef SPDK_CONFIG_UNIT_TESTS 00:08:17.597 #undef SPDK_CONFIG_URING 00:08:17.597 #define SPDK_CONFIG_URING_PATH 00:08:17.597 #undef SPDK_CONFIG_URING_ZNS 00:08:17.597 #undef SPDK_CONFIG_USDT 00:08:17.597 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:17.597 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:17.597 #define SPDK_CONFIG_VFIO_USER 1 00:08:17.597 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:17.597 #define SPDK_CONFIG_VHOST 1 00:08:17.597 #define SPDK_CONFIG_VIRTIO 1 00:08:17.597 #undef SPDK_CONFIG_VTUNE 00:08:17.597 #define SPDK_CONFIG_VTUNE_DIR 00:08:17.597 #define SPDK_CONFIG_WERROR 1 00:08:17.597 #define SPDK_CONFIG_WPDK_DIR 00:08:17.597 #undef SPDK_CONFIG_XNVME 00:08:17.597 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:17.597 15:50:57 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:17.597 15:50:57 -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.597 15:50:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.597 15:50:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.597 15:50:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.597 15:50:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.597 15:50:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.597 15:50:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.597 15:50:57 -- paths/export.sh@5 -- # export PATH 00:08:17.597 15:50:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.597 15:50:57 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:17.859 15:50:57 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:17.859 15:50:57 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:17.859 15:50:57 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:17.859 15:50:57 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:17.859 15:50:57 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:17.859 15:50:57 -- pm/common@67 -- # TEST_TAG=N/A 00:08:17.859 15:50:57 -- pm/common@68 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:17.859 15:50:57 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:17.859 15:50:57 -- pm/common@71 -- # uname -s 00:08:17.859 15:50:57 -- pm/common@71 -- # PM_OS=Linux 00:08:17.859 15:50:57 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:17.859 15:50:57 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:08:17.859 15:50:57 -- pm/common@76 -- # [[ Linux == Linux ]] 00:08:17.859 15:50:57 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:08:17.859 15:50:57 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:08:17.859 15:50:57 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:17.859 15:50:57 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:17.859 15:50:57 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:08:17.859 15:50:57 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:08:17.859 15:50:57 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:17.859 15:50:57 -- common/autotest_common.sh@57 -- # : 0 00:08:17.859 15:50:57 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:08:17.859 15:50:57 -- common/autotest_common.sh@61 -- # : 0 00:08:17.859 15:50:57 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:17.859 15:50:57 -- common/autotest_common.sh@63 -- # : 0 00:08:17.859 15:50:57 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:08:17.859 15:50:57 -- common/autotest_common.sh@65 -- # : 1 00:08:17.859 15:50:57 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:17.859 15:50:57 -- common/autotest_common.sh@67 -- # : 0 00:08:17.859 15:50:57 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:08:17.859 15:50:57 -- common/autotest_common.sh@69 -- # : 00:08:17.859 15:50:57 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:08:17.859 15:50:57 -- common/autotest_common.sh@71 -- # : 0 00:08:17.859 15:50:57 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:08:17.859 15:50:57 -- common/autotest_common.sh@73 -- # : 0 00:08:17.859 15:50:57 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:08:17.859 15:50:57 -- common/autotest_common.sh@75 -- # : 0 00:08:17.859 15:50:57 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:08:17.859 15:50:57 -- common/autotest_common.sh@77 -- # : 0 00:08:17.859 15:50:57 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:17.859 15:50:57 -- common/autotest_common.sh@79 -- # : 0 00:08:17.859 15:50:57 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:08:17.859 15:50:57 -- common/autotest_common.sh@81 -- # : 0 00:08:17.859 15:50:57 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:08:17.859 15:50:57 -- common/autotest_common.sh@83 -- # : 0 00:08:17.859 15:50:57 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:08:17.859 15:50:57 -- common/autotest_common.sh@85 -- # : 1 00:08:17.859 15:50:57 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:08:17.859 15:50:57 -- common/autotest_common.sh@87 -- # : 0 00:08:17.859 15:50:57 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:08:17.859 15:50:57 -- common/autotest_common.sh@89 -- # : 0 00:08:17.859 15:50:57 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:08:17.859 15:50:57 -- common/autotest_common.sh@91 -- # : 1 
00:08:17.860 15:50:57 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:08:17.860 15:50:57 -- common/autotest_common.sh@93 -- # : 1 00:08:17.860 15:50:57 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:08:17.860 15:50:57 -- common/autotest_common.sh@95 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:17.860 15:50:57 -- common/autotest_common.sh@97 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:08:17.860 15:50:57 -- common/autotest_common.sh@99 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:08:17.860 15:50:57 -- common/autotest_common.sh@101 -- # : tcp 00:08:17.860 15:50:57 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:17.860 15:50:57 -- common/autotest_common.sh@103 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:08:17.860 15:50:57 -- common/autotest_common.sh@105 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:08:17.860 15:50:57 -- common/autotest_common.sh@107 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:08:17.860 15:50:57 -- common/autotest_common.sh@109 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:08:17.860 15:50:57 -- common/autotest_common.sh@111 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:08:17.860 15:50:57 -- common/autotest_common.sh@113 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:08:17.860 15:50:57 -- common/autotest_common.sh@115 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:08:17.860 15:50:57 -- common/autotest_common.sh@117 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:17.860 15:50:57 -- common/autotest_common.sh@119 -- # : 1 00:08:17.860 15:50:57 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:08:17.860 15:50:57 -- common/autotest_common.sh@121 -- # : 1 00:08:17.860 15:50:57 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:08:17.860 15:50:57 -- common/autotest_common.sh@123 -- # : 00:08:17.860 15:50:57 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:17.860 15:50:57 -- common/autotest_common.sh@125 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:08:17.860 15:50:57 -- common/autotest_common.sh@127 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:08:17.860 15:50:57 -- common/autotest_common.sh@129 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:08:17.860 15:50:57 -- common/autotest_common.sh@131 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:08:17.860 15:50:57 -- common/autotest_common.sh@133 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:08:17.860 15:50:57 -- common/autotest_common.sh@135 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:08:17.860 15:50:57 -- common/autotest_common.sh@137 -- # : 00:08:17.860 15:50:57 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:08:17.860 15:50:57 -- 
common/autotest_common.sh@139 -- # : true 00:08:17.860 15:50:57 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:08:17.860 15:50:57 -- common/autotest_common.sh@141 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:08:17.860 15:50:57 -- common/autotest_common.sh@143 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:08:17.860 15:50:57 -- common/autotest_common.sh@145 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:08:17.860 15:50:57 -- common/autotest_common.sh@147 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:08:17.860 15:50:57 -- common/autotest_common.sh@149 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:08:17.860 15:50:57 -- common/autotest_common.sh@151 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:08:17.860 15:50:57 -- common/autotest_common.sh@153 -- # : e810 00:08:17.860 15:50:57 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:08:17.860 15:50:57 -- common/autotest_common.sh@155 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:08:17.860 15:50:57 -- common/autotest_common.sh@157 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:08:17.860 15:50:57 -- common/autotest_common.sh@159 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:08:17.860 15:50:57 -- common/autotest_common.sh@161 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:08:17.860 15:50:57 -- common/autotest_common.sh@163 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:08:17.860 15:50:57 -- common/autotest_common.sh@166 -- # : 00:08:17.860 15:50:57 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:08:17.860 15:50:57 -- common/autotest_common.sh@168 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:08:17.860 15:50:57 -- common/autotest_common.sh@170 -- # : 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:17.860 15:50:57 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:17.860 15:50:57 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:17.860 15:50:57 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:17.860 15:50:57 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:17.860 15:50:57 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:17.860 15:50:57 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:17.860 15:50:57 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:17.860 15:50:57 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:17.860 15:50:57 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:17.860 15:50:57 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:17.860 15:50:57 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:17.860 15:50:57 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:17.860 15:50:57 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:17.860 15:50:57 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:08:17.860 15:50:57 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:17.860 15:50:57 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:17.860 15:50:57 -- common/autotest_common.sh@193 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:17.860 15:50:57 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:17.860 15:50:57 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:17.860 15:50:57 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:08:17.860 15:50:57 -- common/autotest_common.sh@199 -- # cat 00:08:17.860 15:50:57 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:08:17.860 15:50:57 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:17.860 15:50:57 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:17.860 15:50:57 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:17.860 15:50:57 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:17.860 15:50:57 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:08:17.860 15:50:57 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:08:17.860 15:50:57 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:17.860 15:50:57 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:17.860 15:50:57 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:17.860 15:50:57 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:17.860 15:50:57 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:17.860 15:50:57 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:17.860 15:50:57 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:17.860 15:50:57 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:17.860 15:50:57 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:17.860 15:50:57 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:17.860 15:50:57 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:17.860 15:50:57 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:17.860 15:50:57 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:08:17.860 15:50:57 -- common/autotest_common.sh@252 -- # export valgrind= 00:08:17.860 15:50:57 -- common/autotest_common.sh@252 -- # valgrind= 00:08:17.860 15:50:57 -- common/autotest_common.sh@258 -- # uname -s 00:08:17.860 15:50:57 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:08:17.860 15:50:57 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:08:17.860 15:50:57 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:08:17.860 15:50:57 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:08:17.860 15:50:57 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:08:17.860 15:50:57 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:08:17.860 
15:50:57 -- common/autotest_common.sh@268 -- # MAKE=make 00:08:17.860 15:50:57 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j96 00:08:17.860 15:50:57 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:08:17.860 15:50:57 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:08:17.860 15:50:57 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:08:17.860 15:50:57 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:08:17.860 15:50:57 -- common/autotest_common.sh@289 -- # for i in "$@" 00:08:17.860 15:50:57 -- common/autotest_common.sh@290 -- # case "$i" in 00:08:17.860 15:50:57 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:08:17.860 15:50:57 -- common/autotest_common.sh@307 -- # [[ -z 2302614 ]] 00:08:17.860 15:50:57 -- common/autotest_common.sh@307 -- # kill -0 2302614 00:08:17.860 15:50:57 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:08:17.860 15:50:57 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:08:17.860 15:50:57 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:08:17.860 15:50:57 -- common/autotest_common.sh@320 -- # local mount target_dir 00:08:17.860 15:50:57 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:08:17.860 15:50:57 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:08:17.860 15:50:57 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:08:17.860 15:50:57 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:08:17.860 15:50:57 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.47KlLu 00:08:17.860 15:50:57 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:17.860 15:50:57 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:08:17.860 15:50:57 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:08:17.860 15:50:57 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.47KlLu/tests/target /tmp/spdk.47KlLu 00:08:17.860 15:50:57 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:08:17.860 15:50:57 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:17.860 15:50:57 -- common/autotest_common.sh@316 -- # df -T 00:08:17.860 15:50:57 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:08:17.860 15:50:57 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:08:17.860 15:50:57 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:08:17.860 15:50:57 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:08:17.860 15:50:57 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:08:17.860 15:50:57 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:08:17.860 15:50:57 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:17.860 15:50:57 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:08:17.860 15:50:57 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:08:17.860 15:50:57 -- common/autotest_common.sh@351 -- # avails["$mount"]=996753408 00:08:17.860 15:50:57 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:08:17.860 15:50:57 -- common/autotest_common.sh@352 -- # uses["$mount"]=4287676416 00:08:17.860 15:50:57 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:17.860 15:50:57 -- common/autotest_common.sh@350 -- # 
mounts["$mount"]=spdk_root 00:08:17.860 15:50:57 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:08:17.860 15:50:57 -- common/autotest_common.sh@351 -- # avails["$mount"]=186097930240 00:08:17.860 15:50:57 -- common/autotest_common.sh@351 -- # sizes["$mount"]=195974328320 00:08:17.860 15:50:57 -- common/autotest_common.sh@352 -- # uses["$mount"]=9876398080 00:08:17.860 15:50:57 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:17.860 15:50:57 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:17.860 15:50:57 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:17.860 15:50:57 -- common/autotest_common.sh@351 -- # avails["$mount"]=97984548864 00:08:17.860 15:50:57 -- common/autotest_common.sh@351 -- # sizes["$mount"]=97987162112 00:08:17.860 15:50:57 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:08:17.860 15:50:57 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:17.860 15:50:57 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:17.860 15:50:57 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:17.860 15:50:57 -- common/autotest_common.sh@351 -- # avails["$mount"]=39185489920 00:08:17.860 15:50:57 -- common/autotest_common.sh@351 -- # sizes["$mount"]=39194865664 00:08:17.860 15:50:57 -- common/autotest_common.sh@352 -- # uses["$mount"]=9375744 00:08:17.860 15:50:57 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:17.860 15:50:57 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:17.860 15:50:57 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:17.860 15:50:57 -- common/autotest_common.sh@351 -- # avails["$mount"]=97986211840 00:08:17.860 15:50:57 -- common/autotest_common.sh@351 -- # sizes["$mount"]=97987166208 00:08:17.860 15:50:57 -- common/autotest_common.sh@352 -- # uses["$mount"]=954368 00:08:17.860 15:50:57 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:17.860 15:50:57 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:08:17.860 15:50:57 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:08:17.860 15:50:57 -- common/autotest_common.sh@351 -- # avails["$mount"]=19597426688 00:08:17.860 15:50:57 -- common/autotest_common.sh@351 -- # sizes["$mount"]=19597430784 00:08:17.860 15:50:57 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:08:17.860 15:50:57 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:08:17.860 15:50:57 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:08:17.860 * Looking for test storage... 
00:08:17.860 15:50:57 -- common/autotest_common.sh@357 -- # local target_space new_size 00:08:17.860 15:50:57 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:08:17.860 15:50:57 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.860 15:50:57 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:17.860 15:50:57 -- common/autotest_common.sh@361 -- # mount=/ 00:08:17.860 15:50:57 -- common/autotest_common.sh@363 -- # target_space=186097930240 00:08:17.860 15:50:57 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:08:17.860 15:50:57 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:08:17.860 15:50:57 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:08:17.860 15:50:57 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:08:17.860 15:50:57 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:08:17.860 15:50:57 -- common/autotest_common.sh@370 -- # new_size=12090990592 00:08:17.860 15:50:57 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:17.860 15:50:57 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.860 15:50:57 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.860 15:50:57 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.860 15:50:57 -- common/autotest_common.sh@378 -- # return 0 00:08:17.860 15:50:57 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:08:17.860 15:50:57 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:08:17.860 15:50:57 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:17.860 15:50:57 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:17.860 15:50:57 -- common/autotest_common.sh@1673 -- # true 00:08:17.860 15:50:57 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:08:17.860 15:50:57 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:17.860 15:50:57 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:17.860 15:50:57 -- common/autotest_common.sh@27 -- # exec 00:08:17.860 15:50:57 -- common/autotest_common.sh@29 -- # exec 00:08:17.860 15:50:57 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:17.860 15:50:57 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:17.860 15:50:57 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:17.860 15:50:57 -- common/autotest_common.sh@18 -- # set -x 00:08:17.860 15:50:57 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.860 15:50:57 -- nvmf/common.sh@7 -- # uname -s 00:08:17.860 15:50:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.860 15:50:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.860 15:50:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.860 15:50:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.860 15:50:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.860 15:50:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.860 15:50:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.860 15:50:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.860 15:50:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.860 15:50:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.861 15:50:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:17.861 15:50:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:17.861 15:50:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.861 15:50:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.861 15:50:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.861 15:50:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.861 15:50:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.861 15:50:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.861 15:50:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.861 15:50:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.861 15:50:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.861 15:50:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.861 15:50:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.861 15:50:57 -- paths/export.sh@5 -- # export PATH 00:08:17.861 15:50:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.861 15:50:57 -- nvmf/common.sh@47 -- # : 0 00:08:17.861 15:50:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:17.861 15:50:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:17.861 15:50:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.861 15:50:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.861 15:50:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.861 15:50:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:17.861 15:50:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:17.861 15:50:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:17.861 15:50:57 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:17.861 15:50:57 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:17.861 15:50:57 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:17.861 15:50:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:17.861 15:50:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.861 15:50:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:17.861 15:50:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:17.861 15:50:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:17.861 15:50:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.861 15:50:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:17.861 15:50:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.861 15:50:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:17.861 15:50:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:17.861 15:50:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:17.861 15:50:57 -- common/autotest_common.sh@10 -- # set +x 00:08:23.144 15:51:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:23.144 15:51:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:23.144 15:51:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:23.144 15:51:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:23.144 15:51:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:23.144 15:51:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:23.144 15:51:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:23.144 15:51:02 -- 
nvmf/common.sh@295 -- # net_devs=() 00:08:23.144 15:51:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:23.144 15:51:02 -- nvmf/common.sh@296 -- # e810=() 00:08:23.144 15:51:02 -- nvmf/common.sh@296 -- # local -ga e810 00:08:23.144 15:51:02 -- nvmf/common.sh@297 -- # x722=() 00:08:23.144 15:51:02 -- nvmf/common.sh@297 -- # local -ga x722 00:08:23.144 15:51:02 -- nvmf/common.sh@298 -- # mlx=() 00:08:23.144 15:51:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:23.144 15:51:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.144 15:51:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.144 15:51:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.144 15:51:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.144 15:51:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.144 15:51:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.144 15:51:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.144 15:51:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.144 15:51:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.144 15:51:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.144 15:51:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.144 15:51:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:23.144 15:51:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:23.144 15:51:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:23.144 15:51:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:23.144 15:51:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:23.144 15:51:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:23.144 15:51:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.144 15:51:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:23.144 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:23.144 15:51:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.144 15:51:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.144 15:51:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.145 15:51:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.145 15:51:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.145 15:51:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.145 15:51:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:23.145 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:23.145 15:51:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.145 15:51:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.145 15:51:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.145 15:51:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.145 15:51:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.145 15:51:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:23.145 15:51:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:23.145 15:51:02 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:23.145 15:51:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.145 15:51:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.145 15:51:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:23.145 15:51:02 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.145 15:51:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:23.145 Found net devices under 0000:86:00.0: cvl_0_0 00:08:23.145 15:51:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.145 15:51:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.145 15:51:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.145 15:51:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:23.145 15:51:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.145 15:51:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:23.145 Found net devices under 0000:86:00.1: cvl_0_1 00:08:23.145 15:51:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.145 15:51:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:23.145 15:51:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:23.145 15:51:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:23.145 15:51:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:23.145 15:51:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:23.145 15:51:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.145 15:51:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.145 15:51:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.145 15:51:02 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:23.145 15:51:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.145 15:51:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.145 15:51:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:23.145 15:51:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.145 15:51:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.145 15:51:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:23.145 15:51:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:23.145 15:51:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.145 15:51:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.145 15:51:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.145 15:51:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.145 15:51:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:23.145 15:51:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.145 15:51:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.145 15:51:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.145 15:51:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:23.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:08:23.145 00:08:23.145 --- 10.0.0.2 ping statistics --- 00:08:23.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.145 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:08:23.145 15:51:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:23.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:08:23.145 00:08:23.145 --- 10.0.0.1 ping statistics --- 00:08:23.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.145 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:08:23.145 15:51:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.145 15:51:02 -- nvmf/common.sh@411 -- # return 0 00:08:23.145 15:51:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:23.145 15:51:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.145 15:51:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:23.145 15:51:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:23.145 15:51:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.145 15:51:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:23.145 15:51:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:23.404 15:51:02 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:23.404 15:51:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:23.404 15:51:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.404 15:51:02 -- common/autotest_common.sh@10 -- # set +x 00:08:23.404 ************************************ 00:08:23.404 START TEST nvmf_filesystem_no_in_capsule 00:08:23.404 ************************************ 00:08:23.404 15:51:02 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:08:23.404 15:51:02 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:23.404 15:51:02 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:23.404 15:51:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:23.404 15:51:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:23.404 15:51:02 -- common/autotest_common.sh@10 -- # set +x 00:08:23.404 15:51:02 -- nvmf/common.sh@470 -- # nvmfpid=2305767 00:08:23.404 15:51:02 -- nvmf/common.sh@471 -- # waitforlisten 2305767 00:08:23.404 15:51:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:23.404 15:51:02 -- common/autotest_common.sh@817 -- # '[' -z 2305767 ']' 00:08:23.404 15:51:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.404 15:51:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:23.404 15:51:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.404 15:51:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:23.404 15:51:02 -- common/autotest_common.sh@10 -- # set +x 00:08:23.404 [2024-04-26 15:51:03.026823] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:08:23.404 [2024-04-26 15:51:03.026909] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.404 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.662 [2024-04-26 15:51:03.136294] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.921 [2024-04-26 15:51:03.371756] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
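The nvmftestinit/nvmf_tcp_init records above boil down to splitting the two ice ports into a target namespace and an initiator side before nvmf_tgt is launched inside that namespace. A condensed sketch of those steps; the interface, namespace and address values (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk, 10.0.0.1/10.0.0.2) are the ones this particular run logged, not general defaults:

  # condensed from the nvmf_tcp_init trace above
  NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NVMF_TARGET_NAMESPACE"
  ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the host namespace
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                            # reachability check in both directions
  ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1
  modprobe nvme-tcp                                             # host-side NVMe/TCP driver for the initiator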
00:08:23.921 [2024-04-26 15:51:03.371801] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.921 [2024-04-26 15:51:03.371811] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.921 [2024-04-26 15:51:03.371822] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.921 [2024-04-26 15:51:03.371829] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.921 [2024-04-26 15:51:03.371902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.921 [2024-04-26 15:51:03.371975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.921 [2024-04-26 15:51:03.372038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.921 [2024-04-26 15:51:03.372046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.180 15:51:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:24.180 15:51:03 -- common/autotest_common.sh@850 -- # return 0 00:08:24.180 15:51:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:24.180 15:51:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:24.180 15:51:03 -- common/autotest_common.sh@10 -- # set +x 00:08:24.180 15:51:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.180 15:51:03 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:24.180 15:51:03 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:24.180 15:51:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.180 15:51:03 -- common/autotest_common.sh@10 -- # set +x 00:08:24.180 [2024-04-26 15:51:03.843061] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.180 15:51:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.180 15:51:03 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:24.180 15:51:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.180 15:51:03 -- common/autotest_common.sh@10 -- # set +x 00:08:25.153 Malloc1 00:08:25.153 15:51:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:25.153 15:51:04 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:25.153 15:51:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:25.153 15:51:04 -- common/autotest_common.sh@10 -- # set +x 00:08:25.153 15:51:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:25.153 15:51:04 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:25.153 15:51:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:25.153 15:51:04 -- common/autotest_common.sh@10 -- # set +x 00:08:25.153 15:51:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:25.153 15:51:04 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:25.153 15:51:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:25.153 15:51:04 -- common/autotest_common.sh@10 -- # set +x 00:08:25.153 [2024-04-26 15:51:04.553345] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:25.153 15:51:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:25.153 15:51:04 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:08:25.153 15:51:04 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:08:25.153 15:51:04 -- common/autotest_common.sh@1365 -- # local bdev_info 00:08:25.153 15:51:04 -- common/autotest_common.sh@1366 -- # local bs 00:08:25.153 15:51:04 -- common/autotest_common.sh@1367 -- # local nb 00:08:25.153 15:51:04 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:25.153 15:51:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:25.153 15:51:04 -- common/autotest_common.sh@10 -- # set +x 00:08:25.153 15:51:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:25.153 15:51:04 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:08:25.153 { 00:08:25.153 "name": "Malloc1", 00:08:25.153 "aliases": [ 00:08:25.153 "834e356c-ddea-4c49-af01-84d3084a5004" 00:08:25.153 ], 00:08:25.153 "product_name": "Malloc disk", 00:08:25.153 "block_size": 512, 00:08:25.153 "num_blocks": 1048576, 00:08:25.153 "uuid": "834e356c-ddea-4c49-af01-84d3084a5004", 00:08:25.153 "assigned_rate_limits": { 00:08:25.153 "rw_ios_per_sec": 0, 00:08:25.153 "rw_mbytes_per_sec": 0, 00:08:25.153 "r_mbytes_per_sec": 0, 00:08:25.153 "w_mbytes_per_sec": 0 00:08:25.153 }, 00:08:25.153 "claimed": true, 00:08:25.153 "claim_type": "exclusive_write", 00:08:25.153 "zoned": false, 00:08:25.153 "supported_io_types": { 00:08:25.153 "read": true, 00:08:25.153 "write": true, 00:08:25.153 "unmap": true, 00:08:25.153 "write_zeroes": true, 00:08:25.153 "flush": true, 00:08:25.153 "reset": true, 00:08:25.153 "compare": false, 00:08:25.153 "compare_and_write": false, 00:08:25.153 "abort": true, 00:08:25.153 "nvme_admin": false, 00:08:25.153 "nvme_io": false 00:08:25.153 }, 00:08:25.153 "memory_domains": [ 00:08:25.153 { 00:08:25.153 "dma_device_id": "system", 00:08:25.153 "dma_device_type": 1 00:08:25.153 }, 00:08:25.153 { 00:08:25.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.153 "dma_device_type": 2 00:08:25.153 } 00:08:25.153 ], 00:08:25.153 "driver_specific": {} 00:08:25.153 } 00:08:25.153 ]' 00:08:25.153 15:51:04 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:08:25.153 15:51:04 -- common/autotest_common.sh@1369 -- # bs=512 00:08:25.153 15:51:04 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:08:25.153 15:51:04 -- common/autotest_common.sh@1370 -- # nb=1048576 00:08:25.153 15:51:04 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:08:25.153 15:51:04 -- common/autotest_common.sh@1374 -- # echo 512 00:08:25.153 15:51:04 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:25.153 15:51:04 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:26.136 15:51:05 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:26.136 15:51:05 -- common/autotest_common.sh@1184 -- # local i=0 00:08:26.136 15:51:05 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:26.136 15:51:05 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:26.136 15:51:05 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:28.668 15:51:07 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:28.668 15:51:07 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:28.668 15:51:07 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:28.668 15:51:07 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
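Pulling the provisioning commands out of the interleaved trace, the no-in-capsule suite stands up the target and attaches the host roughly as follows. Arguments are the ones logged above; rpc_cmd is the suite's RPC helper (assumed here to forward to the running nvmf_tgt), and the wait loop is a condensed form of the waitforserial polling seen in the trace:

  # target side, driven over RPC against the nvmf_tgt started in the namespace
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0        # -c 0: no in-capsule data for this suite
  rpc_cmd bdev_malloc_create 512 512 -b Malloc1               # 512 MiB malloc bdev, 512 B blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: connect, then wait until lsblk reports the namespace by its serial
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  while (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) < 1 )); do sleep 2; done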
00:08:28.668 15:51:07 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:28.668 15:51:07 -- common/autotest_common.sh@1194 -- # return 0 00:08:28.668 15:51:07 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:28.668 15:51:07 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:28.668 15:51:07 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:28.668 15:51:07 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:28.668 15:51:07 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:28.668 15:51:07 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:28.668 15:51:07 -- setup/common.sh@80 -- # echo 536870912 00:08:28.668 15:51:07 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:28.668 15:51:07 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:28.668 15:51:07 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:28.668 15:51:07 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:28.668 15:51:08 -- target/filesystem.sh@69 -- # partprobe 00:08:29.235 15:51:08 -- target/filesystem.sh@70 -- # sleep 1 00:08:30.171 15:51:09 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:30.171 15:51:09 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:30.171 15:51:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:30.171 15:51:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:30.172 15:51:09 -- common/autotest_common.sh@10 -- # set +x 00:08:30.431 ************************************ 00:08:30.431 START TEST filesystem_ext4 00:08:30.431 ************************************ 00:08:30.431 15:51:09 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:30.431 15:51:09 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:30.431 15:51:09 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:30.431 15:51:09 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:30.431 15:51:09 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:30.431 15:51:09 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:30.431 15:51:09 -- common/autotest_common.sh@914 -- # local i=0 00:08:30.431 15:51:09 -- common/autotest_common.sh@915 -- # local force 00:08:30.431 15:51:09 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:30.431 15:51:09 -- common/autotest_common.sh@918 -- # force=-F 00:08:30.431 15:51:09 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:30.431 mke2fs 1.46.5 (30-Dec-2021) 00:08:30.431 Discarding device blocks: 0/522240 done 00:08:30.431 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:30.431 Filesystem UUID: beda9b88-b65b-45c5-b4b1-ab85ccf2c5c6 00:08:30.431 Superblock backups stored on blocks: 00:08:30.431 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:30.431 00:08:30.431 Allocating group tables: 0/64 done 00:08:30.431 Writing inode tables: 0/64 done 00:08:30.996 Creating journal (8192 blocks): done 00:08:31.821 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:08:31.821 00:08:31.821 15:51:11 -- common/autotest_common.sh@931 -- # return 0 00:08:31.821 15:51:11 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:31.821 15:51:11 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:32.079 15:51:11 -- target/filesystem.sh@25 -- # sync 00:08:32.079 15:51:11 -- target/filesystem.sh@26 -- 
# rm /mnt/device/aaa 00:08:32.079 15:51:11 -- target/filesystem.sh@27 -- # sync 00:08:32.079 15:51:11 -- target/filesystem.sh@29 -- # i=0 00:08:32.079 15:51:11 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:32.079 15:51:11 -- target/filesystem.sh@37 -- # kill -0 2305767 00:08:32.079 15:51:11 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:32.079 15:51:11 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:32.079 15:51:11 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:32.079 15:51:11 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:32.079 00:08:32.079 real 0m1.671s 00:08:32.079 user 0m0.028s 00:08:32.079 sys 0m0.064s 00:08:32.079 15:51:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:32.079 15:51:11 -- common/autotest_common.sh@10 -- # set +x 00:08:32.079 ************************************ 00:08:32.079 END TEST filesystem_ext4 00:08:32.079 ************************************ 00:08:32.079 15:51:11 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:32.079 15:51:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:32.079 15:51:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.079 15:51:11 -- common/autotest_common.sh@10 -- # set +x 00:08:32.337 ************************************ 00:08:32.337 START TEST filesystem_btrfs 00:08:32.337 ************************************ 00:08:32.337 15:51:11 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:32.337 15:51:11 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:32.337 15:51:11 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:32.337 15:51:11 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:32.337 15:51:11 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:32.337 15:51:11 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:32.337 15:51:11 -- common/autotest_common.sh@914 -- # local i=0 00:08:32.337 15:51:11 -- common/autotest_common.sh@915 -- # local force 00:08:32.338 15:51:11 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:32.338 15:51:11 -- common/autotest_common.sh@920 -- # force=-f 00:08:32.338 15:51:11 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:32.596 btrfs-progs v6.6.2 00:08:32.596 See https://btrfs.readthedocs.io for more information. 00:08:32.596 00:08:32.596 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
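Each filesystem_* subtest in this trace follows the same cycle; a sketch reconstructed from the ext4 pass that just completed, with the device paths and mount point as logged (the btrfs and xfs passes only swap the mkfs invocation):

  mkdir -p /mnt/device
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe
  sleep 1
  mkfs.ext4 -F /dev/nvme0n1p1              # btrfs/xfs use mkfs.btrfs -f / mkfs.xfs -f instead
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync                                     # prove the remote namespace accepts and persists writes
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                       # the target process must still be alive after the I/O cycle
  lsblk -l -o NAME | grep -q -w nvme0n1p1  # and the namespace/partition must still be visible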
00:08:32.596 NOTE: several default settings have changed in version 5.15, please make sure 00:08:32.596 this does not affect your deployments: 00:08:32.596 - DUP for metadata (-m dup) 00:08:32.596 - enabled no-holes (-O no-holes) 00:08:32.596 - enabled free-space-tree (-R free-space-tree) 00:08:32.596 00:08:32.596 Label: (null) 00:08:32.596 UUID: a6430857-e3eb-475e-8489-9cb80fc72ccc 00:08:32.596 Node size: 16384 00:08:32.596 Sector size: 4096 00:08:32.596 Filesystem size: 510.00MiB 00:08:32.596 Block group profiles: 00:08:32.596 Data: single 8.00MiB 00:08:32.596 Metadata: DUP 32.00MiB 00:08:32.596 System: DUP 8.00MiB 00:08:32.596 SSD detected: yes 00:08:32.596 Zoned device: no 00:08:32.596 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:32.596 Runtime features: free-space-tree 00:08:32.596 Checksum: crc32c 00:08:32.596 Number of devices: 1 00:08:32.596 Devices: 00:08:32.596 ID SIZE PATH 00:08:32.596 1 510.00MiB /dev/nvme0n1p1 00:08:32.596 00:08:32.596 15:51:12 -- common/autotest_common.sh@931 -- # return 0 00:08:32.596 15:51:12 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:32.854 15:51:12 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:32.855 15:51:12 -- target/filesystem.sh@25 -- # sync 00:08:32.855 15:51:12 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:32.855 15:51:12 -- target/filesystem.sh@27 -- # sync 00:08:32.855 15:51:12 -- target/filesystem.sh@29 -- # i=0 00:08:32.855 15:51:12 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:32.855 15:51:12 -- target/filesystem.sh@37 -- # kill -0 2305767 00:08:32.855 15:51:12 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:32.855 15:51:12 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:32.855 15:51:12 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:32.855 15:51:12 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:32.855 00:08:32.855 real 0m0.725s 00:08:32.855 user 0m0.028s 00:08:32.855 sys 0m0.121s 00:08:32.855 15:51:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:32.855 15:51:12 -- common/autotest_common.sh@10 -- # set +x 00:08:32.855 ************************************ 00:08:32.855 END TEST filesystem_btrfs 00:08:32.855 ************************************ 00:08:32.855 15:51:12 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:32.855 15:51:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:33.113 15:51:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.113 15:51:12 -- common/autotest_common.sh@10 -- # set +x 00:08:33.113 ************************************ 00:08:33.113 START TEST filesystem_xfs 00:08:33.113 ************************************ 00:08:33.113 15:51:12 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:08:33.113 15:51:12 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:33.113 15:51:12 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:33.113 15:51:12 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:33.113 15:51:12 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:33.113 15:51:12 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:33.113 15:51:12 -- common/autotest_common.sh@914 -- # local i=0 00:08:33.113 15:51:12 -- common/autotest_common.sh@915 -- # local force 00:08:33.113 15:51:12 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:33.113 15:51:12 -- common/autotest_common.sh@920 -- # force=-f 00:08:33.113 15:51:12 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:33.113 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:33.113 = sectsz=512 attr=2, projid32bit=1 00:08:33.113 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:33.113 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:33.113 data = bsize=4096 blocks=130560, imaxpct=25 00:08:33.113 = sunit=0 swidth=0 blks 00:08:33.113 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:33.113 log =internal log bsize=4096 blocks=16384, version=2 00:08:33.113 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:33.113 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:34.069 Discarding blocks...Done. 00:08:34.069 15:51:13 -- common/autotest_common.sh@931 -- # return 0 00:08:34.069 15:51:13 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:35.968 15:51:15 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:35.968 15:51:15 -- target/filesystem.sh@25 -- # sync 00:08:35.968 15:51:15 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:35.968 15:51:15 -- target/filesystem.sh@27 -- # sync 00:08:35.968 15:51:15 -- target/filesystem.sh@29 -- # i=0 00:08:35.968 15:51:15 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:35.968 15:51:15 -- target/filesystem.sh@37 -- # kill -0 2305767 00:08:35.968 15:51:15 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:35.968 15:51:15 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:36.226 15:51:15 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:36.226 15:51:15 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:36.226 00:08:36.226 real 0m2.992s 00:08:36.226 user 0m0.029s 00:08:36.226 sys 0m0.066s 00:08:36.226 15:51:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:36.226 15:51:15 -- common/autotest_common.sh@10 -- # set +x 00:08:36.226 ************************************ 00:08:36.226 END TEST filesystem_xfs 00:08:36.226 ************************************ 00:08:36.226 15:51:15 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:36.485 15:51:16 -- target/filesystem.sh@93 -- # sync 00:08:36.485 15:51:16 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:36.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.485 15:51:16 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:36.485 15:51:16 -- common/autotest_common.sh@1205 -- # local i=0 00:08:36.485 15:51:16 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:36.485 15:51:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:36.744 15:51:16 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:36.744 15:51:16 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:36.744 15:51:16 -- common/autotest_common.sh@1217 -- # return 0 00:08:36.744 15:51:16 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:36.744 15:51:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.744 15:51:16 -- common/autotest_common.sh@10 -- # set +x 00:08:36.744 15:51:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.744 15:51:16 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:36.744 15:51:16 -- target/filesystem.sh@101 -- # killprocess 2305767 00:08:36.744 15:51:16 -- common/autotest_common.sh@936 -- # '[' -z 2305767 ']' 00:08:36.744 15:51:16 -- common/autotest_common.sh@940 -- # kill -0 2305767 00:08:36.744 15:51:16 -- 
common/autotest_common.sh@941 -- # uname 00:08:36.744 15:51:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:36.744 15:51:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2305767 00:08:36.744 15:51:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:36.744 15:51:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:36.744 15:51:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2305767' 00:08:36.744 killing process with pid 2305767 00:08:36.744 15:51:16 -- common/autotest_common.sh@955 -- # kill 2305767 00:08:36.744 15:51:16 -- common/autotest_common.sh@960 -- # wait 2305767 00:08:39.276 15:51:18 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:39.276 00:08:39.276 real 0m15.994s 00:08:39.276 user 1m0.664s 00:08:39.276 sys 0m1.488s 00:08:39.276 15:51:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:39.276 15:51:18 -- common/autotest_common.sh@10 -- # set +x 00:08:39.276 ************************************ 00:08:39.276 END TEST nvmf_filesystem_no_in_capsule 00:08:39.276 ************************************ 00:08:39.534 15:51:18 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:39.534 15:51:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:39.534 15:51:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:39.534 15:51:18 -- common/autotest_common.sh@10 -- # set +x 00:08:39.534 ************************************ 00:08:39.534 START TEST nvmf_filesystem_in_capsule 00:08:39.534 ************************************ 00:08:39.534 15:51:19 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:08:39.534 15:51:19 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:39.534 15:51:19 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:39.534 15:51:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:39.534 15:51:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:39.534 15:51:19 -- common/autotest_common.sh@10 -- # set +x 00:08:39.534 15:51:19 -- nvmf/common.sh@470 -- # nvmfpid=2309159 00:08:39.534 15:51:19 -- nvmf/common.sh@471 -- # waitforlisten 2309159 00:08:39.534 15:51:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:39.534 15:51:19 -- common/autotest_common.sh@817 -- # '[' -z 2309159 ']' 00:08:39.534 15:51:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.534 15:51:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:39.534 15:51:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.534 15:51:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:39.534 15:51:19 -- common/autotest_common.sh@10 -- # set +x 00:08:39.534 [2024-04-26 15:51:19.196289] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
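The second suite, nvmf_filesystem_in_capsule, repeats the same sequence; judging from the trace, the only functional change is the in-capsule data size handed to the transport (4096 here versus 0 above, as the nvmf_create_transport record a little further down shows):

  # same flow as the previous suite, with in-capsule data enabled
  in_capsule=4096
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c "$in_capsule"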
00:08:39.534 [2024-04-26 15:51:19.196401] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.793 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.793 [2024-04-26 15:51:19.307454] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.051 [2024-04-26 15:51:19.525109] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.052 [2024-04-26 15:51:19.525153] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.052 [2024-04-26 15:51:19.525163] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.052 [2024-04-26 15:51:19.525172] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.052 [2024-04-26 15:51:19.525179] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.052 [2024-04-26 15:51:19.525255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.052 [2024-04-26 15:51:19.525331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.052 [2024-04-26 15:51:19.525388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.052 [2024-04-26 15:51:19.525398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.310 15:51:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:40.310 15:51:19 -- common/autotest_common.sh@850 -- # return 0 00:08:40.310 15:51:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:40.310 15:51:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:40.310 15:51:19 -- common/autotest_common.sh@10 -- # set +x 00:08:40.568 15:51:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.568 15:51:20 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:40.568 15:51:20 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:40.568 15:51:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:40.568 15:51:20 -- common/autotest_common.sh@10 -- # set +x 00:08:40.568 [2024-04-26 15:51:20.025688] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.568 15:51:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:40.568 15:51:20 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:40.568 15:51:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:40.568 15:51:20 -- common/autotest_common.sh@10 -- # set +x 00:08:41.135 Malloc1 00:08:41.135 15:51:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:41.135 15:51:20 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:41.135 15:51:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:41.135 15:51:20 -- common/autotest_common.sh@10 -- # set +x 00:08:41.135 15:51:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:41.135 15:51:20 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:41.135 15:51:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:41.135 15:51:20 -- common/autotest_common.sh@10 -- # set +x 00:08:41.135 15:51:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:41.135 15:51:20 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:41.135 15:51:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:41.135 15:51:20 -- common/autotest_common.sh@10 -- # set +x 00:08:41.135 [2024-04-26 15:51:20.716600] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.135 15:51:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:41.135 15:51:20 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:41.135 15:51:20 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:08:41.135 15:51:20 -- common/autotest_common.sh@1365 -- # local bdev_info 00:08:41.135 15:51:20 -- common/autotest_common.sh@1366 -- # local bs 00:08:41.135 15:51:20 -- common/autotest_common.sh@1367 -- # local nb 00:08:41.135 15:51:20 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:41.135 15:51:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:41.135 15:51:20 -- common/autotest_common.sh@10 -- # set +x 00:08:41.135 15:51:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:41.135 15:51:20 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:08:41.135 { 00:08:41.135 "name": "Malloc1", 00:08:41.135 "aliases": [ 00:08:41.135 "590ee16b-0d07-4df8-aadd-0a96dd4df0ff" 00:08:41.135 ], 00:08:41.135 "product_name": "Malloc disk", 00:08:41.135 "block_size": 512, 00:08:41.135 "num_blocks": 1048576, 00:08:41.135 "uuid": "590ee16b-0d07-4df8-aadd-0a96dd4df0ff", 00:08:41.135 "assigned_rate_limits": { 00:08:41.135 "rw_ios_per_sec": 0, 00:08:41.135 "rw_mbytes_per_sec": 0, 00:08:41.135 "r_mbytes_per_sec": 0, 00:08:41.135 "w_mbytes_per_sec": 0 00:08:41.135 }, 00:08:41.135 "claimed": true, 00:08:41.135 "claim_type": "exclusive_write", 00:08:41.135 "zoned": false, 00:08:41.135 "supported_io_types": { 00:08:41.135 "read": true, 00:08:41.135 "write": true, 00:08:41.135 "unmap": true, 00:08:41.135 "write_zeroes": true, 00:08:41.135 "flush": true, 00:08:41.135 "reset": true, 00:08:41.135 "compare": false, 00:08:41.135 "compare_and_write": false, 00:08:41.135 "abort": true, 00:08:41.135 "nvme_admin": false, 00:08:41.135 "nvme_io": false 00:08:41.135 }, 00:08:41.135 "memory_domains": [ 00:08:41.135 { 00:08:41.135 "dma_device_id": "system", 00:08:41.135 "dma_device_type": 1 00:08:41.135 }, 00:08:41.135 { 00:08:41.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.135 "dma_device_type": 2 00:08:41.135 } 00:08:41.135 ], 00:08:41.135 "driver_specific": {} 00:08:41.135 } 00:08:41.135 ]' 00:08:41.135 15:51:20 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:08:41.135 15:51:20 -- common/autotest_common.sh@1369 -- # bs=512 00:08:41.135 15:51:20 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:08:41.394 15:51:20 -- common/autotest_common.sh@1370 -- # nb=1048576 00:08:41.394 15:51:20 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:08:41.394 15:51:20 -- common/autotest_common.sh@1374 -- # echo 512 00:08:41.394 15:51:20 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:41.394 15:51:20 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:42.772 15:51:22 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:42.772 15:51:22 -- common/autotest_common.sh@1184 -- # local i=0 00:08:42.772 15:51:22 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:42.772 15:51:22 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:42.772 15:51:22 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:44.675 15:51:24 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:44.675 15:51:24 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:44.675 15:51:24 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:44.675 15:51:24 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:44.675 15:51:24 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:44.675 15:51:24 -- common/autotest_common.sh@1194 -- # return 0 00:08:44.675 15:51:24 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:44.675 15:51:24 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:44.675 15:51:24 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:44.675 15:51:24 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:44.675 15:51:24 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:44.675 15:51:24 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:44.675 15:51:24 -- setup/common.sh@80 -- # echo 536870912 00:08:44.675 15:51:24 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:44.675 15:51:24 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:44.675 15:51:24 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:44.675 15:51:24 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:44.934 15:51:24 -- target/filesystem.sh@69 -- # partprobe 00:08:44.934 15:51:24 -- target/filesystem.sh@70 -- # sleep 1 00:08:46.312 15:51:25 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:46.312 15:51:25 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:46.312 15:51:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:46.312 15:51:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.312 15:51:25 -- common/autotest_common.sh@10 -- # set +x 00:08:46.312 ************************************ 00:08:46.312 START TEST filesystem_in_capsule_ext4 00:08:46.312 ************************************ 00:08:46.312 15:51:25 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:46.312 15:51:25 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:46.312 15:51:25 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:46.312 15:51:25 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:46.312 15:51:25 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:46.312 15:51:25 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:46.312 15:51:25 -- common/autotest_common.sh@914 -- # local i=0 00:08:46.312 15:51:25 -- common/autotest_common.sh@915 -- # local force 00:08:46.312 15:51:25 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:46.312 15:51:25 -- common/autotest_common.sh@918 -- # force=-F 00:08:46.312 15:51:25 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:46.312 mke2fs 1.46.5 (30-Dec-2021) 00:08:46.312 Discarding device blocks: 0/522240 done 00:08:46.312 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:46.312 Filesystem UUID: b161a31e-6a8f-47fc-b434-4a7716e85572 00:08:46.312 Superblock backups stored on blocks: 00:08:46.312 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:46.312 00:08:46.312 
Allocating group tables: 0/64 done 00:08:46.312 Writing inode tables: 0/64 done 00:08:46.312 Creating journal (8192 blocks): done 00:08:46.312 Writing superblocks and filesystem accounting information: 0/64 done 00:08:46.312 00:08:46.312 15:51:25 -- common/autotest_common.sh@931 -- # return 0 00:08:46.312 15:51:25 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:46.571 15:51:26 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:46.571 15:51:26 -- target/filesystem.sh@25 -- # sync 00:08:46.571 15:51:26 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:46.571 15:51:26 -- target/filesystem.sh@27 -- # sync 00:08:46.571 15:51:26 -- target/filesystem.sh@29 -- # i=0 00:08:46.571 15:51:26 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:46.830 15:51:26 -- target/filesystem.sh@37 -- # kill -0 2309159 00:08:46.830 15:51:26 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:46.830 15:51:26 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:46.830 15:51:26 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:46.830 15:51:26 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:46.830 00:08:46.830 real 0m0.588s 00:08:46.830 user 0m0.025s 00:08:46.830 sys 0m0.063s 00:08:46.830 15:51:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:46.830 15:51:26 -- common/autotest_common.sh@10 -- # set +x 00:08:46.830 ************************************ 00:08:46.830 END TEST filesystem_in_capsule_ext4 00:08:46.830 ************************************ 00:08:46.830 15:51:26 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:46.830 15:51:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:46.830 15:51:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.830 15:51:26 -- common/autotest_common.sh@10 -- # set +x 00:08:46.830 ************************************ 00:08:46.830 START TEST filesystem_in_capsule_btrfs 00:08:46.830 ************************************ 00:08:46.830 15:51:26 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:46.830 15:51:26 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:46.830 15:51:26 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:46.830 15:51:26 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:46.830 15:51:26 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:46.830 15:51:26 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:46.830 15:51:26 -- common/autotest_common.sh@914 -- # local i=0 00:08:46.830 15:51:26 -- common/autotest_common.sh@915 -- # local force 00:08:46.830 15:51:26 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:46.830 15:51:26 -- common/autotest_common.sh@920 -- # force=-f 00:08:46.830 15:51:26 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:47.397 btrfs-progs v6.6.2 00:08:47.397 See https://btrfs.readthedocs.io for more information. 00:08:47.397 00:08:47.397 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:47.397 NOTE: several default settings have changed in version 5.15, please make sure 00:08:47.397 this does not affect your deployments: 00:08:47.397 - DUP for metadata (-m dup) 00:08:47.397 - enabled no-holes (-O no-holes) 00:08:47.397 - enabled free-space-tree (-R free-space-tree) 00:08:47.397 00:08:47.397 Label: (null) 00:08:47.397 UUID: 6b1dcc79-3a0f-43df-9fc7-79919dbd5875 00:08:47.397 Node size: 16384 00:08:47.397 Sector size: 4096 00:08:47.397 Filesystem size: 510.00MiB 00:08:47.397 Block group profiles: 00:08:47.397 Data: single 8.00MiB 00:08:47.397 Metadata: DUP 32.00MiB 00:08:47.397 System: DUP 8.00MiB 00:08:47.397 SSD detected: yes 00:08:47.397 Zoned device: no 00:08:47.397 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:47.397 Runtime features: free-space-tree 00:08:47.397 Checksum: crc32c 00:08:47.397 Number of devices: 1 00:08:47.397 Devices: 00:08:47.397 ID SIZE PATH 00:08:47.397 1 510.00MiB /dev/nvme0n1p1 00:08:47.397 00:08:47.397 15:51:26 -- common/autotest_common.sh@931 -- # return 0 00:08:47.397 15:51:26 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:47.964 15:51:27 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:47.964 15:51:27 -- target/filesystem.sh@25 -- # sync 00:08:47.964 15:51:27 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:47.964 15:51:27 -- target/filesystem.sh@27 -- # sync 00:08:47.964 15:51:27 -- target/filesystem.sh@29 -- # i=0 00:08:47.964 15:51:27 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:47.964 15:51:27 -- target/filesystem.sh@37 -- # kill -0 2309159 00:08:47.964 15:51:27 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:47.964 15:51:27 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:47.964 15:51:27 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:47.964 15:51:27 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:47.964 00:08:47.964 real 0m1.071s 00:08:47.964 user 0m0.022s 00:08:47.964 sys 0m0.136s 00:08:47.964 15:51:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:47.964 15:51:27 -- common/autotest_common.sh@10 -- # set +x 00:08:47.964 ************************************ 00:08:47.964 END TEST filesystem_in_capsule_btrfs 00:08:47.964 ************************************ 00:08:47.964 15:51:27 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:47.964 15:51:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:47.964 15:51:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:47.964 15:51:27 -- common/autotest_common.sh@10 -- # set +x 00:08:48.222 ************************************ 00:08:48.222 START TEST filesystem_in_capsule_xfs 00:08:48.222 ************************************ 00:08:48.222 15:51:27 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:08:48.222 15:51:27 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:48.222 15:51:27 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:48.222 15:51:27 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:48.222 15:51:27 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:48.222 15:51:27 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:48.222 15:51:27 -- common/autotest_common.sh@914 -- # local i=0 00:08:48.222 15:51:27 -- common/autotest_common.sh@915 -- # local force 00:08:48.222 15:51:27 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:48.222 15:51:27 -- common/autotest_common.sh@920 -- # force=-f 
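The make_filesystem helper that keeps appearing in these records can be read back, in outline, from the xtrace lines (autotest_common.sh ~@912-@931 in this run). This is only the branch the trace exercises; the real helper may do more (for example retries) than what is visible here:

  make_filesystem() {
      local fstype=$1 dev_name=$2
      local i=0 force
      if [ "$fstype" = ext4 ]; then
          force=-F          # ext4 spells "force" differently
      else
          force=-f          # btrfs and xfs
      fi
      mkfs."$fstype" $force "$dev_name"
      # returns 0 on success (the @931 record in this trace)
  }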
00:08:48.222 15:51:27 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:48.222 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:48.222 = sectsz=512 attr=2, projid32bit=1 00:08:48.222 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:48.222 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:48.222 data = bsize=4096 blocks=130560, imaxpct=25 00:08:48.222 = sunit=0 swidth=0 blks 00:08:48.222 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:48.222 log =internal log bsize=4096 blocks=16384, version=2 00:08:48.222 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:48.222 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:49.174 Discarding blocks...Done. 00:08:49.174 15:51:28 -- common/autotest_common.sh@931 -- # return 0 00:08:49.174 15:51:28 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:51.073 15:51:30 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:51.073 15:51:30 -- target/filesystem.sh@25 -- # sync 00:08:51.073 15:51:30 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:51.073 15:51:30 -- target/filesystem.sh@27 -- # sync 00:08:51.073 15:51:30 -- target/filesystem.sh@29 -- # i=0 00:08:51.073 15:51:30 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:51.073 15:51:30 -- target/filesystem.sh@37 -- # kill -0 2309159 00:08:51.073 15:51:30 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:51.073 15:51:30 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:51.073 15:51:30 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:51.073 15:51:30 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:51.073 00:08:51.073 real 0m2.778s 00:08:51.073 user 0m0.020s 00:08:51.073 sys 0m0.074s 00:08:51.073 15:51:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:51.073 15:51:30 -- common/autotest_common.sh@10 -- # set +x 00:08:51.073 ************************************ 00:08:51.073 END TEST filesystem_in_capsule_xfs 00:08:51.073 ************************************ 00:08:51.073 15:51:30 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:51.073 15:51:30 -- target/filesystem.sh@93 -- # sync 00:08:51.073 15:51:30 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:51.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.332 15:51:30 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:51.332 15:51:30 -- common/autotest_common.sh@1205 -- # local i=0 00:08:51.332 15:51:30 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:51.332 15:51:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:51.332 15:51:30 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:51.332 15:51:30 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:51.332 15:51:30 -- common/autotest_common.sh@1217 -- # return 0 00:08:51.332 15:51:30 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:51.332 15:51:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:51.332 15:51:30 -- common/autotest_common.sh@10 -- # set +x 00:08:51.332 15:51:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:51.332 15:51:30 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:51.332 15:51:30 -- target/filesystem.sh@101 -- # killprocess 2309159 00:08:51.332 15:51:30 -- common/autotest_common.sh@936 -- # '[' -z 2309159 ']' 00:08:51.332 15:51:30 -- common/autotest_common.sh@940 -- # kill -0 2309159 
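After the xfs pass, the suite tears everything down in the order recorded above; a condensed sketch using the names and PID from this run, with the disconnect-wait loop simplified from the waitforserial_disconnect records:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1              # drop the SPDK_TEST partition
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 2; done
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"                          # killprocess: stop the nvmf_tgt reactor

nvmftestfini then unloads nvme-tcp, nvme-fabrics and nvme-keyring and flushes the address off cvl_0_1, as the tail of the trace that follows shows.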
00:08:51.332 15:51:30 -- common/autotest_common.sh@941 -- # uname 00:08:51.332 15:51:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:51.332 15:51:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2309159 00:08:51.332 15:51:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:51.332 15:51:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:51.332 15:51:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2309159' 00:08:51.332 killing process with pid 2309159 00:08:51.332 15:51:30 -- common/autotest_common.sh@955 -- # kill 2309159 00:08:51.332 15:51:30 -- common/autotest_common.sh@960 -- # wait 2309159 00:08:54.618 15:51:33 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:54.618 00:08:54.618 real 0m14.575s 00:08:54.618 user 0m55.071s 00:08:54.618 sys 0m1.535s 00:08:54.618 15:51:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:54.618 15:51:33 -- common/autotest_common.sh@10 -- # set +x 00:08:54.618 ************************************ 00:08:54.618 END TEST nvmf_filesystem_in_capsule 00:08:54.618 ************************************ 00:08:54.618 15:51:33 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:54.618 15:51:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:54.618 15:51:33 -- nvmf/common.sh@117 -- # sync 00:08:54.618 15:51:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:54.618 15:51:33 -- nvmf/common.sh@120 -- # set +e 00:08:54.618 15:51:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:54.618 15:51:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:54.618 rmmod nvme_tcp 00:08:54.618 rmmod nvme_fabrics 00:08:54.618 rmmod nvme_keyring 00:08:54.618 15:51:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:54.618 15:51:33 -- nvmf/common.sh@124 -- # set -e 00:08:54.618 15:51:33 -- nvmf/common.sh@125 -- # return 0 00:08:54.618 15:51:33 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:08:54.618 15:51:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:54.618 15:51:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:54.618 15:51:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:54.618 15:51:33 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:54.618 15:51:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:54.618 15:51:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.618 15:51:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:54.618 15:51:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.525 15:51:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:56.525 00:08:56.525 real 0m38.684s 00:08:56.525 user 1m57.540s 00:08:56.525 sys 0m7.315s 00:08:56.525 15:51:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:56.525 15:51:35 -- common/autotest_common.sh@10 -- # set +x 00:08:56.525 ************************************ 00:08:56.525 END TEST nvmf_filesystem 00:08:56.525 ************************************ 00:08:56.525 15:51:35 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:56.525 15:51:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:56.525 15:51:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:56.525 15:51:35 -- common/autotest_common.sh@10 -- # set +x 00:08:56.525 ************************************ 00:08:56.525 START TEST nvmf_discovery 00:08:56.525 ************************************ 00:08:56.525 
15:51:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:56.525 * Looking for test storage... 00:08:56.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:56.525 15:51:36 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:56.525 15:51:36 -- nvmf/common.sh@7 -- # uname -s 00:08:56.525 15:51:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.525 15:51:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.525 15:51:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.525 15:51:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.525 15:51:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.525 15:51:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.525 15:51:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.525 15:51:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.525 15:51:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.525 15:51:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.525 15:51:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:56.525 15:51:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:56.525 15:51:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.525 15:51:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.525 15:51:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:56.525 15:51:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.525 15:51:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:56.525 15:51:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.525 15:51:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.525 15:51:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.525 15:51:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.525 15:51:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.525 15:51:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.525 15:51:36 -- paths/export.sh@5 -- # export PATH 00:08:56.525 15:51:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.525 15:51:36 -- nvmf/common.sh@47 -- # : 0 00:08:56.525 15:51:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:56.525 15:51:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:56.525 15:51:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.525 15:51:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.525 15:51:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.525 15:51:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:56.525 15:51:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:56.525 15:51:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:56.525 15:51:36 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:56.525 15:51:36 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:56.525 15:51:36 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:56.525 15:51:36 -- target/discovery.sh@15 -- # hash nvme 00:08:56.525 15:51:36 -- target/discovery.sh@20 -- # nvmftestinit 00:08:56.525 15:51:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:56.525 15:51:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.525 15:51:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:56.525 15:51:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:56.525 15:51:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:56.525 15:51:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.525 15:51:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:56.525 15:51:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.525 15:51:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:56.525 15:51:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:56.525 15:51:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:56.525 15:51:36 -- common/autotest_common.sh@10 -- # set +x 00:09:01.866 15:51:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:01.866 15:51:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:01.866 15:51:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:01.866 15:51:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:01.866 15:51:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:01.866 15:51:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:01.866 15:51:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:01.866 15:51:40 -- 
nvmf/common.sh@295 -- # net_devs=() 00:09:01.866 15:51:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:01.866 15:51:40 -- nvmf/common.sh@296 -- # e810=() 00:09:01.866 15:51:40 -- nvmf/common.sh@296 -- # local -ga e810 00:09:01.866 15:51:40 -- nvmf/common.sh@297 -- # x722=() 00:09:01.866 15:51:40 -- nvmf/common.sh@297 -- # local -ga x722 00:09:01.866 15:51:40 -- nvmf/common.sh@298 -- # mlx=() 00:09:01.866 15:51:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:01.866 15:51:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.866 15:51:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.866 15:51:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.866 15:51:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.866 15:51:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.866 15:51:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.866 15:51:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.866 15:51:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.866 15:51:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.866 15:51:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.866 15:51:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.866 15:51:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:01.866 15:51:40 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:01.866 15:51:40 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:01.866 15:51:40 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:01.866 15:51:40 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:01.866 15:51:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:01.866 15:51:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.866 15:51:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:01.866 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:01.866 15:51:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.866 15:51:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.866 15:51:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.866 15:51:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.866 15:51:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.866 15:51:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.866 15:51:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:01.866 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:01.866 15:51:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.866 15:51:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.866 15:51:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.866 15:51:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.866 15:51:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.866 15:51:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:01.866 15:51:40 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:01.866 15:51:40 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:01.866 15:51:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.866 15:51:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.866 15:51:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:01.866 15:51:40 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.866 15:51:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:01.866 Found net devices under 0000:86:00.0: cvl_0_0 00:09:01.866 15:51:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.866 15:51:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.866 15:51:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.866 15:51:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:01.866 15:51:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.866 15:51:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:01.866 Found net devices under 0000:86:00.1: cvl_0_1 00:09:01.866 15:51:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.866 15:51:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:01.866 15:51:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:01.866 15:51:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:01.866 15:51:40 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:01.866 15:51:40 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:01.866 15:51:40 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.866 15:51:40 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.866 15:51:40 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.866 15:51:40 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:01.866 15:51:40 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.866 15:51:40 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.866 15:51:40 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:01.866 15:51:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.866 15:51:40 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.866 15:51:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:01.866 15:51:40 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:01.866 15:51:40 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.866 15:51:40 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.866 15:51:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.866 15:51:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.866 15:51:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:01.866 15:51:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.866 15:51:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.866 15:51:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.866 15:51:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:01.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:01.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:09:01.866 00:09:01.866 --- 10.0.0.2 ping statistics --- 00:09:01.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.866 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:09:01.866 15:51:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:01.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:09:01.866 00:09:01.866 --- 10.0.0.1 ping statistics --- 00:09:01.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.866 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:09:01.866 15:51:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.866 15:51:41 -- nvmf/common.sh@411 -- # return 0 00:09:01.866 15:51:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:01.866 15:51:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.866 15:51:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:01.866 15:51:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:01.866 15:51:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.866 15:51:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:01.866 15:51:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:01.866 15:51:41 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:01.866 15:51:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:01.866 15:51:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:01.866 15:51:41 -- common/autotest_common.sh@10 -- # set +x 00:09:01.866 15:51:41 -- nvmf/common.sh@470 -- # nvmfpid=2315228 00:09:01.866 15:51:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:01.866 15:51:41 -- nvmf/common.sh@471 -- # waitforlisten 2315228 00:09:01.866 15:51:41 -- common/autotest_common.sh@817 -- # '[' -z 2315228 ']' 00:09:01.866 15:51:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.866 15:51:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:01.866 15:51:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.866 15:51:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:01.866 15:51:41 -- common/autotest_common.sh@10 -- # set +x 00:09:01.866 [2024-04-26 15:51:41.353240] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:09:01.866 [2024-04-26 15:51:41.353325] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.866 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.866 [2024-04-26 15:51:41.462684] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.129 [2024-04-26 15:51:41.694647] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.129 [2024-04-26 15:51:41.694694] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.129 [2024-04-26 15:51:41.694704] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.129 [2024-04-26 15:51:41.694715] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.129 [2024-04-26 15:51:41.694722] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
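Before each TCP suite runs, nvmftestinit/nvmf_tcp_init moves the target-side port of the e810 pair into its own network namespace so that initiator and target traffic actually traverses the link, verifies reachability in both directions, loads nvme-tcp and only then launches the target. Condensed from the trace (interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses and the core mask are specific to this rig):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> host
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # run from the spdk checkout
  nvmfpid=$!                                           # waitforlisten then polls /var/tmp/spdk.sock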
00:09:02.129 [2024-04-26 15:51:41.694801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.129 [2024-04-26 15:51:41.694881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.129 [2024-04-26 15:51:41.694932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.129 [2024-04-26 15:51:41.694940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:02.697 15:51:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:02.697 15:51:42 -- common/autotest_common.sh@850 -- # return 0 00:09:02.697 15:51:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:02.697 15:51:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:02.697 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.697 15:51:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.697 15:51:42 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:02.697 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.697 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.697 [2024-04-26 15:51:42.175685] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.697 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.697 15:51:42 -- target/discovery.sh@26 -- # seq 1 4 00:09:02.697 15:51:42 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:02.697 15:51:42 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:02.697 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.697 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.697 Null1 00:09:02.697 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.698 15:51:42 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:02.698 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.698 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.698 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.698 15:51:42 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:02.698 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.698 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.698 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.698 15:51:42 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.698 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.698 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.698 [2024-04-26 15:51:42.224001] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.698 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.698 15:51:42 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:02.698 15:51:42 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:02.698 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.698 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.698 Null2 00:09:02.698 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.698 15:51:42 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:02.698 15:51:42 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.698 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.698 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.698 15:51:42 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:02.698 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.698 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.698 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.698 15:51:42 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:02.698 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.698 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.698 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.698 15:51:42 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:02.698 15:51:42 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:02.698 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.698 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.698 Null3 00:09:02.698 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.698 15:51:42 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:02.698 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.698 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.698 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.698 15:51:42 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:02.698 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.698 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.698 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.698 15:51:42 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:02.698 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.698 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.698 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.698 15:51:42 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:02.698 15:51:42 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:02.698 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.698 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.698 Null4 00:09:02.698 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.698 15:51:42 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:02.698 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.698 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.698 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.698 15:51:42 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:02.698 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.698 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.698 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.698 15:51:42 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:02.698 
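With the target listening on its RPC socket, discovery.sh populates it entirely over RPC: one TCP transport, then for each i in 1..4 a null bdev, a subsystem, a namespace mapping and a TCP listener. The trace drives these through the rpc_cmd helper (a thin wrapper over scripts/rpc.py and /var/tmp/spdk.sock); issued directly, the first iteration would look roughly like:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # once, before the loop
  scripts/rpc.py bdev_null_create Null1 102400 512          # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from the script
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The nvme discover output further down then lists all four subsystems plus the current discovery subsystem and the 4430 referral, six records in total.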
15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.698 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.698 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.698 15:51:42 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:02.698 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.698 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.698 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.698 15:51:42 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:02.698 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.698 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.698 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.698 15:51:42 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:09:02.957 00:09:02.957 Discovery Log Number of Records 6, Generation counter 6 00:09:02.957 =====Discovery Log Entry 0====== 00:09:02.957 trtype: tcp 00:09:02.957 adrfam: ipv4 00:09:02.957 subtype: current discovery subsystem 00:09:02.957 treq: not required 00:09:02.957 portid: 0 00:09:02.957 trsvcid: 4420 00:09:02.957 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:02.957 traddr: 10.0.0.2 00:09:02.957 eflags: explicit discovery connections, duplicate discovery information 00:09:02.957 sectype: none 00:09:02.957 =====Discovery Log Entry 1====== 00:09:02.957 trtype: tcp 00:09:02.957 adrfam: ipv4 00:09:02.957 subtype: nvme subsystem 00:09:02.957 treq: not required 00:09:02.957 portid: 0 00:09:02.957 trsvcid: 4420 00:09:02.957 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:02.957 traddr: 10.0.0.2 00:09:02.957 eflags: none 00:09:02.957 sectype: none 00:09:02.957 =====Discovery Log Entry 2====== 00:09:02.957 trtype: tcp 00:09:02.957 adrfam: ipv4 00:09:02.957 subtype: nvme subsystem 00:09:02.957 treq: not required 00:09:02.957 portid: 0 00:09:02.957 trsvcid: 4420 00:09:02.957 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:02.957 traddr: 10.0.0.2 00:09:02.957 eflags: none 00:09:02.957 sectype: none 00:09:02.957 =====Discovery Log Entry 3====== 00:09:02.957 trtype: tcp 00:09:02.957 adrfam: ipv4 00:09:02.957 subtype: nvme subsystem 00:09:02.957 treq: not required 00:09:02.957 portid: 0 00:09:02.957 trsvcid: 4420 00:09:02.957 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:02.957 traddr: 10.0.0.2 00:09:02.957 eflags: none 00:09:02.957 sectype: none 00:09:02.957 =====Discovery Log Entry 4====== 00:09:02.957 trtype: tcp 00:09:02.957 adrfam: ipv4 00:09:02.957 subtype: nvme subsystem 00:09:02.957 treq: not required 00:09:02.957 portid: 0 00:09:02.957 trsvcid: 4420 00:09:02.957 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:02.957 traddr: 10.0.0.2 00:09:02.957 eflags: none 00:09:02.957 sectype: none 00:09:02.957 =====Discovery Log Entry 5====== 00:09:02.957 trtype: tcp 00:09:02.957 adrfam: ipv4 00:09:02.957 subtype: discovery subsystem referral 00:09:02.957 treq: not required 00:09:02.957 portid: 0 00:09:02.957 trsvcid: 4430 00:09:02.957 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:02.957 traddr: 10.0.0.2 00:09:02.957 eflags: none 00:09:02.957 sectype: none 00:09:02.957 15:51:42 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:02.957 Perform nvmf subsystem discovery via RPC 00:09:02.957 15:51:42 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:02.957 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.957 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.957 [2024-04-26 15:51:42.516793] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:09:02.957 [ 00:09:02.957 { 00:09:02.957 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:02.957 "subtype": "Discovery", 00:09:02.957 "listen_addresses": [ 00:09:02.957 { 00:09:02.957 "transport": "TCP", 00:09:02.957 "trtype": "TCP", 00:09:02.957 "adrfam": "IPv4", 00:09:02.957 "traddr": "10.0.0.2", 00:09:02.957 "trsvcid": "4420" 00:09:02.957 } 00:09:02.957 ], 00:09:02.957 "allow_any_host": true, 00:09:02.957 "hosts": [] 00:09:02.957 }, 00:09:02.957 { 00:09:02.957 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:02.957 "subtype": "NVMe", 00:09:02.957 "listen_addresses": [ 00:09:02.957 { 00:09:02.957 "transport": "TCP", 00:09:02.957 "trtype": "TCP", 00:09:02.957 "adrfam": "IPv4", 00:09:02.957 "traddr": "10.0.0.2", 00:09:02.957 "trsvcid": "4420" 00:09:02.957 } 00:09:02.957 ], 00:09:02.957 "allow_any_host": true, 00:09:02.957 "hosts": [], 00:09:02.957 "serial_number": "SPDK00000000000001", 00:09:02.957 "model_number": "SPDK bdev Controller", 00:09:02.957 "max_namespaces": 32, 00:09:02.957 "min_cntlid": 1, 00:09:02.957 "max_cntlid": 65519, 00:09:02.957 "namespaces": [ 00:09:02.957 { 00:09:02.958 "nsid": 1, 00:09:02.958 "bdev_name": "Null1", 00:09:02.958 "name": "Null1", 00:09:02.958 "nguid": "75AB75A3A29A4A638888C93E65B963A9", 00:09:02.958 "uuid": "75ab75a3-a29a-4a63-8888-c93e65b963a9" 00:09:02.958 } 00:09:02.958 ] 00:09:02.958 }, 00:09:02.958 { 00:09:02.958 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:02.958 "subtype": "NVMe", 00:09:02.958 "listen_addresses": [ 00:09:02.958 { 00:09:02.958 "transport": "TCP", 00:09:02.958 "trtype": "TCP", 00:09:02.958 "adrfam": "IPv4", 00:09:02.958 "traddr": "10.0.0.2", 00:09:02.958 "trsvcid": "4420" 00:09:02.958 } 00:09:02.958 ], 00:09:02.958 "allow_any_host": true, 00:09:02.958 "hosts": [], 00:09:02.958 "serial_number": "SPDK00000000000002", 00:09:02.958 "model_number": "SPDK bdev Controller", 00:09:02.958 "max_namespaces": 32, 00:09:02.958 "min_cntlid": 1, 00:09:02.958 "max_cntlid": 65519, 00:09:02.958 "namespaces": [ 00:09:02.958 { 00:09:02.958 "nsid": 1, 00:09:02.958 "bdev_name": "Null2", 00:09:02.958 "name": "Null2", 00:09:02.958 "nguid": "CE83DF36D6764794911F82F1E0D77537", 00:09:02.958 "uuid": "ce83df36-d676-4794-911f-82f1e0d77537" 00:09:02.958 } 00:09:02.958 ] 00:09:02.958 }, 00:09:02.958 { 00:09:02.958 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:02.958 "subtype": "NVMe", 00:09:02.958 "listen_addresses": [ 00:09:02.958 { 00:09:02.958 "transport": "TCP", 00:09:02.958 "trtype": "TCP", 00:09:02.958 "adrfam": "IPv4", 00:09:02.958 "traddr": "10.0.0.2", 00:09:02.958 "trsvcid": "4420" 00:09:02.958 } 00:09:02.958 ], 00:09:02.958 "allow_any_host": true, 00:09:02.958 "hosts": [], 00:09:02.958 "serial_number": "SPDK00000000000003", 00:09:02.958 "model_number": "SPDK bdev Controller", 00:09:02.958 "max_namespaces": 32, 00:09:02.958 "min_cntlid": 1, 00:09:02.958 "max_cntlid": 65519, 00:09:02.958 "namespaces": [ 00:09:02.958 { 00:09:02.958 "nsid": 1, 00:09:02.958 "bdev_name": "Null3", 00:09:02.958 "name": "Null3", 00:09:02.958 "nguid": "4BF83B94BC7341ADA597CB3921120773", 00:09:02.958 "uuid": "4bf83b94-bc73-41ad-a597-cb3921120773" 00:09:02.958 } 00:09:02.958 ] 
00:09:02.958 }, 00:09:02.958 { 00:09:02.958 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:02.958 "subtype": "NVMe", 00:09:02.958 "listen_addresses": [ 00:09:02.958 { 00:09:02.958 "transport": "TCP", 00:09:02.958 "trtype": "TCP", 00:09:02.958 "adrfam": "IPv4", 00:09:02.958 "traddr": "10.0.0.2", 00:09:02.958 "trsvcid": "4420" 00:09:02.958 } 00:09:02.958 ], 00:09:02.958 "allow_any_host": true, 00:09:02.958 "hosts": [], 00:09:02.958 "serial_number": "SPDK00000000000004", 00:09:02.958 "model_number": "SPDK bdev Controller", 00:09:02.958 "max_namespaces": 32, 00:09:02.958 "min_cntlid": 1, 00:09:02.958 "max_cntlid": 65519, 00:09:02.958 "namespaces": [ 00:09:02.958 { 00:09:02.958 "nsid": 1, 00:09:02.958 "bdev_name": "Null4", 00:09:02.958 "name": "Null4", 00:09:02.958 "nguid": "3E980AFA6FAA4016B8D5954C7A150F46", 00:09:02.958 "uuid": "3e980afa-6faa-4016-b8d5-954c7a150f46" 00:09:02.958 } 00:09:02.958 ] 00:09:02.958 } 00:09:02.958 ] 00:09:02.958 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.958 15:51:42 -- target/discovery.sh@42 -- # seq 1 4 00:09:02.958 15:51:42 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:02.958 15:51:42 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:02.958 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.958 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.958 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.958 15:51:42 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:02.958 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.958 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.958 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.958 15:51:42 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:02.958 15:51:42 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:02.958 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.958 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.958 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.958 15:51:42 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:02.958 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.958 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.958 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.958 15:51:42 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:02.958 15:51:42 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:02.958 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.958 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.958 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.958 15:51:42 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:02.958 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.958 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.958 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.958 15:51:42 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:02.958 15:51:42 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:02.958 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.958 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.958 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:09:02.958 15:51:42 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:02.958 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.958 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.958 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.958 15:51:42 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:02.958 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.958 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.958 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:02.958 15:51:42 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:02.958 15:51:42 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:02.958 15:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:02.958 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:09:02.958 15:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.217 15:51:42 -- target/discovery.sh@49 -- # check_bdevs= 00:09:03.217 15:51:42 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:03.217 15:51:42 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:03.217 15:51:42 -- target/discovery.sh@57 -- # nvmftestfini 00:09:03.217 15:51:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:03.217 15:51:42 -- nvmf/common.sh@117 -- # sync 00:09:03.217 15:51:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:03.217 15:51:42 -- nvmf/common.sh@120 -- # set +e 00:09:03.217 15:51:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:03.217 15:51:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:03.217 rmmod nvme_tcp 00:09:03.217 rmmod nvme_fabrics 00:09:03.217 rmmod nvme_keyring 00:09:03.217 15:51:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:03.217 15:51:42 -- nvmf/common.sh@124 -- # set -e 00:09:03.217 15:51:42 -- nvmf/common.sh@125 -- # return 0 00:09:03.217 15:51:42 -- nvmf/common.sh@478 -- # '[' -n 2315228 ']' 00:09:03.217 15:51:42 -- nvmf/common.sh@479 -- # killprocess 2315228 00:09:03.217 15:51:42 -- common/autotest_common.sh@936 -- # '[' -z 2315228 ']' 00:09:03.217 15:51:42 -- common/autotest_common.sh@940 -- # kill -0 2315228 00:09:03.217 15:51:42 -- common/autotest_common.sh@941 -- # uname 00:09:03.217 15:51:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:03.217 15:51:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2315228 00:09:03.217 15:51:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:03.217 15:51:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:03.217 15:51:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2315228' 00:09:03.217 killing process with pid 2315228 00:09:03.217 15:51:42 -- common/autotest_common.sh@955 -- # kill 2315228 00:09:03.217 [2024-04-26 15:51:42.785504] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:09:03.217 15:51:42 -- common/autotest_common.sh@960 -- # wait 2315228 00:09:04.592 15:51:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:04.592 15:51:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:04.592 15:51:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:04.592 15:51:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:04.592 15:51:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:04.592 15:51:44 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.592 15:51:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.592 15:51:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.498 15:51:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:06.498 00:09:06.498 real 0m10.155s 00:09:06.498 user 0m9.656s 00:09:06.498 sys 0m4.356s 00:09:06.498 15:51:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:06.498 15:51:46 -- common/autotest_common.sh@10 -- # set +x 00:09:06.498 ************************************ 00:09:06.498 END TEST nvmf_discovery 00:09:06.498 ************************************ 00:09:06.757 15:51:46 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:06.757 15:51:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:06.757 15:51:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:06.757 15:51:46 -- common/autotest_common.sh@10 -- # set +x 00:09:06.757 ************************************ 00:09:06.757 START TEST nvmf_referrals 00:09:06.757 ************************************ 00:09:06.757 15:51:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:07.016 * Looking for test storage... 00:09:07.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.016 15:51:46 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:07.016 15:51:46 -- nvmf/common.sh@7 -- # uname -s 00:09:07.016 15:51:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:07.016 15:51:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.016 15:51:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.016 15:51:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.016 15:51:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.016 15:51:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.016 15:51:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.016 15:51:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.016 15:51:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.016 15:51:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.016 15:51:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:07.016 15:51:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:07.016 15:51:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.016 15:51:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.016 15:51:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:07.016 15:51:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.016 15:51:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:07.016 15:51:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.016 15:51:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.016 15:51:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.016 15:51:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.016 15:51:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.016 15:51:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.016 15:51:46 -- paths/export.sh@5 -- # export PATH 00:09:07.016 15:51:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.016 15:51:46 -- nvmf/common.sh@47 -- # : 0 00:09:07.016 15:51:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:07.016 15:51:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:07.016 15:51:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.016 15:51:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.016 15:51:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.016 15:51:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:07.016 15:51:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:07.016 15:51:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:07.016 15:51:46 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:07.016 15:51:46 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:07.016 15:51:46 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:07.016 15:51:46 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:07.016 15:51:46 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:07.016 15:51:46 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:07.016 15:51:46 -- target/referrals.sh@37 -- # nvmftestinit 00:09:07.016 15:51:46 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:09:07.016 15:51:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.016 15:51:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:07.016 15:51:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:07.016 15:51:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:07.016 15:51:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.016 15:51:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:07.016 15:51:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.016 15:51:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:07.016 15:51:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:07.016 15:51:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:07.016 15:51:46 -- common/autotest_common.sh@10 -- # set +x 00:09:12.296 15:51:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:12.296 15:51:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:12.296 15:51:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:12.296 15:51:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:12.296 15:51:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:12.296 15:51:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:12.296 15:51:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:12.296 15:51:51 -- nvmf/common.sh@295 -- # net_devs=() 00:09:12.296 15:51:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:12.296 15:51:51 -- nvmf/common.sh@296 -- # e810=() 00:09:12.296 15:51:51 -- nvmf/common.sh@296 -- # local -ga e810 00:09:12.296 15:51:51 -- nvmf/common.sh@297 -- # x722=() 00:09:12.296 15:51:51 -- nvmf/common.sh@297 -- # local -ga x722 00:09:12.296 15:51:51 -- nvmf/common.sh@298 -- # mlx=() 00:09:12.296 15:51:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:12.296 15:51:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:12.296 15:51:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:12.296 15:51:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:12.296 15:51:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:12.296 15:51:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:12.296 15:51:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:12.296 15:51:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:12.296 15:51:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:12.296 15:51:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:12.296 15:51:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:12.296 15:51:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:12.296 15:51:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:12.296 15:51:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:12.296 15:51:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:12.296 15:51:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:12.296 15:51:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:12.296 15:51:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:12.296 15:51:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.296 15:51:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:12.296 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:12.296 15:51:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:12.296 15:51:51 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:12.296 15:51:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.296 15:51:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.296 15:51:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:12.296 15:51:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.296 15:51:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:12.296 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:12.296 15:51:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:12.296 15:51:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:12.296 15:51:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.296 15:51:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.296 15:51:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:12.296 15:51:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:12.296 15:51:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:12.296 15:51:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:12.296 15:51:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.296 15:51:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.296 15:51:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:12.296 15:51:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.296 15:51:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:12.296 Found net devices under 0000:86:00.0: cvl_0_0 00:09:12.296 15:51:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.296 15:51:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.296 15:51:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.296 15:51:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:12.296 15:51:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.296 15:51:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:12.296 Found net devices under 0000:86:00.1: cvl_0_1 00:09:12.296 15:51:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.296 15:51:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:12.296 15:51:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:12.296 15:51:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:12.296 15:51:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:12.296 15:51:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:12.296 15:51:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.296 15:51:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.296 15:51:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:12.296 15:51:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:12.296 15:51:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:12.296 15:51:51 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:12.296 15:51:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:12.296 15:51:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:12.296 15:51:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.296 15:51:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:12.296 15:51:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:12.296 15:51:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:12.296 15:51:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:09:12.296 15:51:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:12.296 15:51:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:12.296 15:51:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:12.296 15:51:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:12.296 15:51:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:12.296 15:51:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:12.296 15:51:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:12.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:09:12.296 00:09:12.296 --- 10.0.0.2 ping statistics --- 00:09:12.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.296 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:09:12.296 15:51:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:12.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:12.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:09:12.296 00:09:12.296 --- 10.0.0.1 ping statistics --- 00:09:12.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.296 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:09:12.296 15:51:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.296 15:51:51 -- nvmf/common.sh@411 -- # return 0 00:09:12.296 15:51:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:12.297 15:51:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.297 15:51:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:12.297 15:51:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:12.297 15:51:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.297 15:51:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:12.297 15:51:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:12.297 15:51:51 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:12.297 15:51:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:12.297 15:51:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:12.297 15:51:51 -- common/autotest_common.sh@10 -- # set +x 00:09:12.297 15:51:51 -- nvmf/common.sh@470 -- # nvmfpid=2319066 00:09:12.297 15:51:51 -- nvmf/common.sh@471 -- # waitforlisten 2319066 00:09:12.297 15:51:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:12.297 15:51:51 -- common/autotest_common.sh@817 -- # '[' -z 2319066 ']' 00:09:12.297 15:51:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.297 15:51:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:12.297 15:51:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.297 15:51:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:12.297 15:51:51 -- common/autotest_common.sh@10 -- # set +x 00:09:12.556 [2024-04-26 15:51:51.989772] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
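The referrals suite repeats the same namespace bring-up and then starts a fresh nvmf_tgt (pid 2319066 in this run). What it exercises next is the discovery-referral RPC surface: a discovery listener on port 8009, three referrals pointing at 127.0.0.2/.3/.4 port 4430, and a check that the RPC view and an initiator-side nvme discover agree before the referrals are removed again. Reduced to the commands the trace shows (written here as direct scripts/rpc.py calls instead of the rpc_cmd wrapper; the test also passes --hostnqn/--hostid to nvme discover explicitly):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
  scripts/rpc.py nvmf_discovery_get_referrals | jq length      # expects 3
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430   # and likewise for .3 and .4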
00:09:12.556 [2024-04-26 15:51:51.989861] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.556 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.556 [2024-04-26 15:51:52.100806] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:12.815 [2024-04-26 15:51:52.322757] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.815 [2024-04-26 15:51:52.322803] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.815 [2024-04-26 15:51:52.322813] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.815 [2024-04-26 15:51:52.322823] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.815 [2024-04-26 15:51:52.322830] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:12.815 [2024-04-26 15:51:52.322905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.815 [2024-04-26 15:51:52.322985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.815 [2024-04-26 15:51:52.323047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.815 [2024-04-26 15:51:52.323055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.384 15:51:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:13.384 15:51:52 -- common/autotest_common.sh@850 -- # return 0 00:09:13.384 15:51:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:13.384 15:51:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:13.384 15:51:52 -- common/autotest_common.sh@10 -- # set +x 00:09:13.384 15:51:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.384 15:51:52 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:13.384 15:51:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.384 15:51:52 -- common/autotest_common.sh@10 -- # set +x 00:09:13.384 [2024-04-26 15:51:52.809713] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.384 15:51:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.384 15:51:52 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:13.384 15:51:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.384 15:51:52 -- common/autotest_common.sh@10 -- # set +x 00:09:13.384 [2024-04-26 15:51:52.825973] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:09:13.384 15:51:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.384 15:51:52 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:13.384 15:51:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.384 15:51:52 -- common/autotest_common.sh@10 -- # set +x 00:09:13.384 15:51:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.384 15:51:52 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:13.384 15:51:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.384 15:51:52 -- common/autotest_common.sh@10 -- # set +x 00:09:13.384 15:51:52 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:09:13.384 15:51:52 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:13.384 15:51:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.384 15:51:52 -- common/autotest_common.sh@10 -- # set +x 00:09:13.384 15:51:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.384 15:51:52 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:13.384 15:51:52 -- target/referrals.sh@48 -- # jq length 00:09:13.384 15:51:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.384 15:51:52 -- common/autotest_common.sh@10 -- # set +x 00:09:13.384 15:51:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.384 15:51:52 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:13.384 15:51:52 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:13.384 15:51:52 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:13.384 15:51:52 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:13.384 15:51:52 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:13.384 15:51:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.384 15:51:52 -- target/referrals.sh@21 -- # sort 00:09:13.384 15:51:52 -- common/autotest_common.sh@10 -- # set +x 00:09:13.384 15:51:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.384 15:51:52 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:13.384 15:51:52 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:13.384 15:51:52 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:13.384 15:51:52 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:13.384 15:51:52 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:13.384 15:51:52 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:13.384 15:51:52 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:13.384 15:51:52 -- target/referrals.sh@26 -- # sort 00:09:13.644 15:51:53 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:13.644 15:51:53 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:13.644 15:51:53 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:13.644 15:51:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.644 15:51:53 -- common/autotest_common.sh@10 -- # set +x 00:09:13.644 15:51:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.644 15:51:53 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:13.644 15:51:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.644 15:51:53 -- common/autotest_common.sh@10 -- # set +x 00:09:13.644 15:51:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.644 15:51:53 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:13.644 15:51:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.644 15:51:53 -- common/autotest_common.sh@10 -- # set +x 00:09:13.644 15:51:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.644 15:51:53 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:09:13.644 15:51:53 -- target/referrals.sh@56 -- # jq length 00:09:13.644 15:51:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.644 15:51:53 -- common/autotest_common.sh@10 -- # set +x 00:09:13.644 15:51:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.644 15:51:53 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:13.644 15:51:53 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:13.644 15:51:53 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:13.644 15:51:53 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:13.644 15:51:53 -- target/referrals.sh@26 -- # sort 00:09:13.644 15:51:53 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:13.644 15:51:53 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:13.903 15:51:53 -- target/referrals.sh@26 -- # echo 00:09:13.903 15:51:53 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:13.903 15:51:53 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:09:13.903 15:51:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.903 15:51:53 -- common/autotest_common.sh@10 -- # set +x 00:09:13.903 15:51:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.903 15:51:53 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:13.903 15:51:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.903 15:51:53 -- common/autotest_common.sh@10 -- # set +x 00:09:13.903 15:51:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.903 15:51:53 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:13.903 15:51:53 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:13.903 15:51:53 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:13.903 15:51:53 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:13.903 15:51:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.903 15:51:53 -- target/referrals.sh@21 -- # sort 00:09:13.903 15:51:53 -- common/autotest_common.sh@10 -- # set +x 00:09:13.903 15:51:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.903 15:51:53 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:13.903 15:51:53 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:13.903 15:51:53 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:13.903 15:51:53 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:13.903 15:51:53 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:13.903 15:51:53 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:13.903 15:51:53 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:13.903 15:51:53 -- target/referrals.sh@26 -- # sort 00:09:13.903 15:51:53 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:13.903 15:51:53 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:13.903 15:51:53 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:09:13.903 15:51:53 -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:13.903 15:51:53 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:13.903 15:51:53 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:13.903 15:51:53 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:14.162 15:51:53 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:14.162 15:51:53 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:14.162 15:51:53 -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:14.162 15:51:53 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:14.162 15:51:53 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:14.162 15:51:53 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:14.162 15:51:53 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:14.162 15:51:53 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:14.162 15:51:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:14.162 15:51:53 -- common/autotest_common.sh@10 -- # set +x 00:09:14.162 15:51:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:14.162 15:51:53 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:14.162 15:51:53 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:14.162 15:51:53 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:14.162 15:51:53 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:14.162 15:51:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:14.162 15:51:53 -- target/referrals.sh@21 -- # sort 00:09:14.162 15:51:53 -- common/autotest_common.sh@10 -- # set +x 00:09:14.162 15:51:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:14.162 15:51:53 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:14.162 15:51:53 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:14.421 15:51:53 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:14.421 15:51:53 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:14.421 15:51:53 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:14.421 15:51:53 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:14.421 15:51:53 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:14.421 15:51:53 -- target/referrals.sh@26 -- # sort 00:09:14.421 15:51:54 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:14.421 15:51:54 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:14.421 15:51:54 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:14.421 15:51:54 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:14.421 15:51:54 -- 
target/referrals.sh@75 -- # jq -r .subnqn 00:09:14.421 15:51:54 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:14.421 15:51:54 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:14.679 15:51:54 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:14.679 15:51:54 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:14.679 15:51:54 -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:14.679 15:51:54 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:14.679 15:51:54 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:14.679 15:51:54 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:14.679 15:51:54 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:14.679 15:51:54 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:14.679 15:51:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:14.679 15:51:54 -- common/autotest_common.sh@10 -- # set +x 00:09:14.679 15:51:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:14.679 15:51:54 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:14.680 15:51:54 -- target/referrals.sh@82 -- # jq length 00:09:14.680 15:51:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:14.680 15:51:54 -- common/autotest_common.sh@10 -- # set +x 00:09:14.680 15:51:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:14.680 15:51:54 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:14.680 15:51:54 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:14.680 15:51:54 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:14.680 15:51:54 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:14.680 15:51:54 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:14.680 15:51:54 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:14.680 15:51:54 -- target/referrals.sh@26 -- # sort 00:09:14.939 15:51:54 -- target/referrals.sh@26 -- # echo 00:09:14.939 15:51:54 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:14.939 15:51:54 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:14.939 15:51:54 -- target/referrals.sh@86 -- # nvmftestfini 00:09:14.939 15:51:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:14.939 15:51:54 -- nvmf/common.sh@117 -- # sync 00:09:14.939 15:51:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:14.939 15:51:54 -- nvmf/common.sh@120 -- # set +e 00:09:14.939 15:51:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:14.939 15:51:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:14.939 rmmod nvme_tcp 00:09:14.939 rmmod nvme_fabrics 00:09:14.939 rmmod nvme_keyring 00:09:14.939 15:51:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:14.939 15:51:54 -- nvmf/common.sh@124 -- # 
set -e 00:09:14.939 15:51:54 -- nvmf/common.sh@125 -- # return 0 00:09:14.939 15:51:54 -- nvmf/common.sh@478 -- # '[' -n 2319066 ']' 00:09:14.939 15:51:54 -- nvmf/common.sh@479 -- # killprocess 2319066 00:09:14.939 15:51:54 -- common/autotest_common.sh@936 -- # '[' -z 2319066 ']' 00:09:14.939 15:51:54 -- common/autotest_common.sh@940 -- # kill -0 2319066 00:09:14.939 15:51:54 -- common/autotest_common.sh@941 -- # uname 00:09:14.939 15:51:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:14.939 15:51:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2319066 00:09:14.939 15:51:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:14.939 15:51:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:14.939 15:51:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2319066' 00:09:14.939 killing process with pid 2319066 00:09:14.939 15:51:54 -- common/autotest_common.sh@955 -- # kill 2319066 00:09:14.939 15:51:54 -- common/autotest_common.sh@960 -- # wait 2319066 00:09:16.320 15:51:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:16.320 15:51:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:16.320 15:51:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:16.320 15:51:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:16.320 15:51:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:16.320 15:51:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.320 15:51:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:16.320 15:51:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.227 15:51:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:18.227 00:09:18.227 real 0m11.527s 00:09:18.227 user 0m14.860s 00:09:18.227 sys 0m4.865s 00:09:18.227 15:51:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:18.227 15:51:57 -- common/autotest_common.sh@10 -- # set +x 00:09:18.227 ************************************ 00:09:18.227 END TEST nvmf_referrals 00:09:18.227 ************************************ 00:09:18.487 15:51:57 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:18.487 15:51:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:18.487 15:51:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:18.487 15:51:57 -- common/autotest_common.sh@10 -- # set +x 00:09:18.487 ************************************ 00:09:18.487 START TEST nvmf_connect_disconnect 00:09:18.487 ************************************ 00:09:18.487 15:51:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:18.487 * Looking for test storage... 
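In plainer form, the referral checks exercised by the nvmf_referrals test that just finished reduce to the RPC and nvme-cli calls below. A sketch only: rpc_cmd in the trace is the harness shorthand for scripts/rpc.py against the running target, and the --hostnqn/--hostid flags from the trace are omitted here.

    # target side: TCP transport, a discovery listener on 8009, three referrals
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    scripts/rpc.py nvmf_discovery_get_referrals | jq length    # 3

    # host side: the referrals show up as extra entries in the discovery log page
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

    # removing them (optionally scoped to a subsystem NQN with -n) empties the list again
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done

The second half of the test repeats the cycle with -n discovery and -n nqn.2016-06.io.spdk:cnode1 to check that the referral entries report the expected subsystem NQNs.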
00:09:18.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:18.487 15:51:58 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:18.487 15:51:58 -- nvmf/common.sh@7 -- # uname -s 00:09:18.487 15:51:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.487 15:51:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.487 15:51:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.487 15:51:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.487 15:51:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.487 15:51:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.487 15:51:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.487 15:51:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.487 15:51:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.487 15:51:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.487 15:51:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:18.487 15:51:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:18.487 15:51:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.487 15:51:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.487 15:51:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:18.487 15:51:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.487 15:51:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:18.747 15:51:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.747 15:51:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.747 15:51:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.748 15:51:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.748 15:51:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.748 15:51:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.748 15:51:58 -- paths/export.sh@5 -- # export PATH 00:09:18.748 15:51:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.748 15:51:58 -- nvmf/common.sh@47 -- # : 0 00:09:18.748 15:51:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:18.748 15:51:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:18.748 15:51:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.748 15:51:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.748 15:51:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.748 15:51:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:18.748 15:51:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:18.748 15:51:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:18.748 15:51:58 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:18.748 15:51:58 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:18.748 15:51:58 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:18.748 15:51:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:18.748 15:51:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.748 15:51:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:18.748 15:51:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:18.748 15:51:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:18.748 15:51:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.748 15:51:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.748 15:51:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.748 15:51:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:18.748 15:51:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:18.748 15:51:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:18.748 15:51:58 -- common/autotest_common.sh@10 -- # set +x 00:09:24.026 15:52:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:24.026 15:52:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:24.026 15:52:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:24.026 15:52:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:24.026 15:52:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:24.026 15:52:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:24.026 15:52:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:24.026 15:52:02 -- nvmf/common.sh@295 -- # net_devs=() 00:09:24.026 15:52:02 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:09:24.026 15:52:02 -- nvmf/common.sh@296 -- # e810=() 00:09:24.026 15:52:02 -- nvmf/common.sh@296 -- # local -ga e810 00:09:24.026 15:52:02 -- nvmf/common.sh@297 -- # x722=() 00:09:24.026 15:52:02 -- nvmf/common.sh@297 -- # local -ga x722 00:09:24.026 15:52:02 -- nvmf/common.sh@298 -- # mlx=() 00:09:24.026 15:52:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:24.026 15:52:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.026 15:52:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.026 15:52:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.026 15:52:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.026 15:52:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.026 15:52:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.026 15:52:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.026 15:52:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.026 15:52:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.026 15:52:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.026 15:52:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.026 15:52:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:24.026 15:52:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:24.026 15:52:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:24.026 15:52:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:24.026 15:52:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:24.026 15:52:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:24.026 15:52:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.026 15:52:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:24.026 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:24.026 15:52:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.026 15:52:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.026 15:52:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.026 15:52:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.026 15:52:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.026 15:52:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.026 15:52:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:24.026 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:24.026 15:52:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.026 15:52:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.026 15:52:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.026 15:52:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.026 15:52:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.026 15:52:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:24.026 15:52:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:24.027 15:52:02 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:24.027 15:52:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.027 15:52:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.027 15:52:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:24.027 15:52:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.027 15:52:02 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:86:00.0: cvl_0_0' 00:09:24.027 Found net devices under 0000:86:00.0: cvl_0_0 00:09:24.027 15:52:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.027 15:52:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.027 15:52:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.027 15:52:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:24.027 15:52:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.027 15:52:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:24.027 Found net devices under 0000:86:00.1: cvl_0_1 00:09:24.027 15:52:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.027 15:52:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:24.027 15:52:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:24.027 15:52:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:24.027 15:52:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:24.027 15:52:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:24.027 15:52:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.027 15:52:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.027 15:52:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.027 15:52:02 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:24.027 15:52:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.027 15:52:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.027 15:52:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:24.027 15:52:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.027 15:52:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.027 15:52:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:24.027 15:52:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:24.027 15:52:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.027 15:52:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.027 15:52:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.027 15:52:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.027 15:52:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:24.027 15:52:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.027 15:52:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.027 15:52:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.027 15:52:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:24.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:09:24.027 00:09:24.027 --- 10.0.0.2 ping statistics --- 00:09:24.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.027 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:09:24.027 15:52:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:24.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:09:24.027 00:09:24.027 --- 10.0.0.1 ping statistics --- 00:09:24.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.027 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:09:24.027 15:52:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.027 15:52:03 -- nvmf/common.sh@411 -- # return 0 00:09:24.027 15:52:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:24.027 15:52:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.027 15:52:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:24.027 15:52:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:24.027 15:52:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.027 15:52:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:24.027 15:52:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:24.027 15:52:03 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:24.027 15:52:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:24.027 15:52:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:24.027 15:52:03 -- common/autotest_common.sh@10 -- # set +x 00:09:24.027 15:52:03 -- nvmf/common.sh@470 -- # nvmfpid=2323155 00:09:24.027 15:52:03 -- nvmf/common.sh@471 -- # waitforlisten 2323155 00:09:24.027 15:52:03 -- common/autotest_common.sh@817 -- # '[' -z 2323155 ']' 00:09:24.027 15:52:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.027 15:52:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:24.027 15:52:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.027 15:52:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:24.027 15:52:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:24.027 15:52:03 -- common/autotest_common.sh@10 -- # set +x 00:09:24.027 [2024-04-26 15:52:03.116466] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:09:24.027 [2024-04-26 15:52:03.116555] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.027 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.027 [2024-04-26 15:52:03.226757] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:24.027 [2024-04-26 15:52:03.451226] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.027 [2024-04-26 15:52:03.451271] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.027 [2024-04-26 15:52:03.451281] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.027 [2024-04-26 15:52:03.451291] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.027 [2024-04-26 15:52:03.451298] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
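The connect_disconnect test starting here provisions a malloc-backed subsystem over the same RPC socket and then attaches and detaches the kernel initiator five times (num_iterations=5). Consolidated, it amounts to the sketch below; the target-side RPCs are copied from the trace that follows, while the nvme connect flags are an assumption based on standard nvme-cli usage rather than the script's exact invocation.

    # target side (inside cvl_0_0_ns_spdk): a 64 MB malloc bdev with 512-byte blocks behind cnode1
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # host side: each "disconnected 1 controller(s)" line below is one iteration of this loop
    for i in $(seq 5); do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done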
00:09:24.027 [2024-04-26 15:52:03.451389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.027 [2024-04-26 15:52:03.451475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.027 [2024-04-26 15:52:03.451537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.027 [2024-04-26 15:52:03.451545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.287 15:52:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:24.287 15:52:03 -- common/autotest_common.sh@850 -- # return 0 00:09:24.287 15:52:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:24.287 15:52:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:24.287 15:52:03 -- common/autotest_common.sh@10 -- # set +x 00:09:24.287 15:52:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.287 15:52:03 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:24.287 15:52:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.287 15:52:03 -- common/autotest_common.sh@10 -- # set +x 00:09:24.287 [2024-04-26 15:52:03.929545] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.287 15:52:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.287 15:52:03 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:24.287 15:52:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.287 15:52:03 -- common/autotest_common.sh@10 -- # set +x 00:09:24.546 15:52:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.546 15:52:04 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:24.546 15:52:04 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:24.546 15:52:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.546 15:52:04 -- common/autotest_common.sh@10 -- # set +x 00:09:24.546 15:52:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.546 15:52:04 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:24.546 15:52:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.546 15:52:04 -- common/autotest_common.sh@10 -- # set +x 00:09:24.546 15:52:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.546 15:52:04 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.546 15:52:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.546 15:52:04 -- common/autotest_common.sh@10 -- # set +x 00:09:24.546 [2024-04-26 15:52:04.054191] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.546 15:52:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.546 15:52:04 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:24.546 15:52:04 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:24.546 15:52:04 -- target/connect_disconnect.sh@34 -- # set +x 00:09:27.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.912 15:52:20 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:41.912 15:52:20 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:41.912 15:52:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:41.912 15:52:20 -- nvmf/common.sh@117 -- # sync 00:09:41.912 15:52:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:41.912 15:52:20 -- nvmf/common.sh@120 -- # set +e 00:09:41.912 15:52:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:41.912 15:52:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:41.912 rmmod nvme_tcp 00:09:41.912 rmmod nvme_fabrics 00:09:41.912 rmmod nvme_keyring 00:09:41.912 15:52:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:41.912 15:52:20 -- nvmf/common.sh@124 -- # set -e 00:09:41.912 15:52:20 -- nvmf/common.sh@125 -- # return 0 00:09:41.912 15:52:20 -- nvmf/common.sh@478 -- # '[' -n 2323155 ']' 00:09:41.912 15:52:20 -- nvmf/common.sh@479 -- # killprocess 2323155 00:09:41.912 15:52:20 -- common/autotest_common.sh@936 -- # '[' -z 2323155 ']' 00:09:41.912 15:52:20 -- common/autotest_common.sh@940 -- # kill -0 2323155 00:09:41.912 15:52:20 -- common/autotest_common.sh@941 -- # uname 00:09:41.912 15:52:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:41.912 15:52:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2323155 00:09:41.912 15:52:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:41.912 15:52:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:41.912 15:52:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2323155' 00:09:41.912 killing process with pid 2323155 00:09:41.912 15:52:20 -- common/autotest_common.sh@955 -- # kill 2323155 00:09:41.912 15:52:20 -- common/autotest_common.sh@960 -- # wait 2323155 00:09:42.902 15:52:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:42.902 15:52:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:42.902 15:52:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:42.902 15:52:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:42.902 15:52:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:42.902 15:52:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.902 15:52:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:42.902 15:52:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.808 15:52:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:44.808 00:09:44.808 real 0m26.350s 00:09:44.808 user 1m15.041s 00:09:44.808 sys 0m4.978s 00:09:44.808 15:52:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:44.808 15:52:24 -- common/autotest_common.sh@10 -- # set +x 00:09:44.808 ************************************ 00:09:44.808 END TEST nvmf_connect_disconnect 00:09:44.808 ************************************ 00:09:44.808 15:52:24 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:44.808 15:52:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:44.808 15:52:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:44.808 15:52:24 -- common/autotest_common.sh@10 -- # set +x 00:09:45.067 ************************************ 00:09:45.067 START TEST nvmf_multitarget 00:09:45.067 ************************************ 00:09:45.067 15:52:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh 
--transport=tcp 00:09:45.067 * Looking for test storage... 00:09:45.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.067 15:52:24 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.067 15:52:24 -- nvmf/common.sh@7 -- # uname -s 00:09:45.067 15:52:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.067 15:52:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.067 15:52:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.067 15:52:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.067 15:52:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.067 15:52:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.067 15:52:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.067 15:52:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.067 15:52:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.067 15:52:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.067 15:52:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:45.067 15:52:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:45.067 15:52:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.067 15:52:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.067 15:52:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.067 15:52:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.067 15:52:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.067 15:52:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.067 15:52:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.067 15:52:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.067 15:52:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.067 15:52:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.067 15:52:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.067 15:52:24 -- paths/export.sh@5 -- # export PATH 00:09:45.067 15:52:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.067 15:52:24 -- nvmf/common.sh@47 -- # : 0 00:09:45.067 15:52:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:45.067 15:52:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:45.067 15:52:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.067 15:52:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.067 15:52:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.067 15:52:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:45.067 15:52:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:45.067 15:52:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:45.067 15:52:24 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:45.067 15:52:24 -- target/multitarget.sh@15 -- # nvmftestinit 00:09:45.067 15:52:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:45.067 15:52:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.067 15:52:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:45.067 15:52:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:45.067 15:52:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:45.067 15:52:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.067 15:52:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:45.067 15:52:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.067 15:52:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:45.067 15:52:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:45.067 15:52:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:45.067 15:52:24 -- common/autotest_common.sh@10 -- # set +x 00:09:50.375 15:52:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:50.375 15:52:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:50.375 15:52:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:50.375 15:52:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:50.375 15:52:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:50.375 15:52:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:50.375 15:52:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:50.375 15:52:29 -- nvmf/common.sh@295 -- # net_devs=() 00:09:50.375 15:52:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:50.375 15:52:29 -- 
nvmf/common.sh@296 -- # e810=() 00:09:50.375 15:52:29 -- nvmf/common.sh@296 -- # local -ga e810 00:09:50.375 15:52:29 -- nvmf/common.sh@297 -- # x722=() 00:09:50.375 15:52:29 -- nvmf/common.sh@297 -- # local -ga x722 00:09:50.375 15:52:29 -- nvmf/common.sh@298 -- # mlx=() 00:09:50.375 15:52:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:50.375 15:52:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:50.375 15:52:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:50.375 15:52:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:50.375 15:52:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:50.375 15:52:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:50.375 15:52:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:50.375 15:52:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:50.375 15:52:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:50.375 15:52:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:50.375 15:52:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:50.375 15:52:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:50.375 15:52:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:50.375 15:52:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:50.375 15:52:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:50.375 15:52:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:50.375 15:52:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:50.375 15:52:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:50.375 15:52:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:50.375 15:52:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:50.375 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:50.375 15:52:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:50.375 15:52:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:50.375 15:52:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.375 15:52:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.375 15:52:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:50.375 15:52:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:50.375 15:52:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:50.375 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:50.375 15:52:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:50.375 15:52:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:50.375 15:52:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.375 15:52:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.375 15:52:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:50.375 15:52:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:50.375 15:52:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:50.375 15:52:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:50.375 15:52:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:50.375 15:52:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.375 15:52:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:50.375 15:52:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.375 15:52:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:09:50.375 Found net devices under 0000:86:00.0: cvl_0_0 00:09:50.375 15:52:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.375 15:52:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:50.375 15:52:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.375 15:52:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:50.375 15:52:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.375 15:52:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:50.375 Found net devices under 0000:86:00.1: cvl_0_1 00:09:50.375 15:52:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.375 15:52:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:50.375 15:52:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:50.375 15:52:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:50.375 15:52:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:50.375 15:52:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:50.375 15:52:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.375 15:52:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.375 15:52:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:50.375 15:52:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:50.375 15:52:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:50.375 15:52:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:50.375 15:52:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:50.375 15:52:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:50.375 15:52:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.375 15:52:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:50.375 15:52:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:50.375 15:52:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:50.375 15:52:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:50.375 15:52:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:50.375 15:52:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:50.375 15:52:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:50.375 15:52:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:50.375 15:52:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:50.375 15:52:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:50.634 15:52:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:50.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:50.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:09:50.634 00:09:50.634 --- 10.0.0.2 ping statistics --- 00:09:50.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.634 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:09:50.634 15:52:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:50.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:50.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:09:50.634 00:09:50.634 --- 10.0.0.1 ping statistics --- 00:09:50.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.634 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:09:50.634 15:52:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.634 15:52:30 -- nvmf/common.sh@411 -- # return 0 00:09:50.634 15:52:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:50.634 15:52:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.634 15:52:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:50.634 15:52:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:50.634 15:52:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.634 15:52:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:50.634 15:52:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:50.634 15:52:30 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:50.634 15:52:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:50.634 15:52:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:50.634 15:52:30 -- common/autotest_common.sh@10 -- # set +x 00:09:50.634 15:52:30 -- nvmf/common.sh@470 -- # nvmfpid=2329968 00:09:50.634 15:52:30 -- nvmf/common.sh@471 -- # waitforlisten 2329968 00:09:50.634 15:52:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:50.634 15:52:30 -- common/autotest_common.sh@817 -- # '[' -z 2329968 ']' 00:09:50.634 15:52:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.634 15:52:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:50.634 15:52:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.634 15:52:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:50.634 15:52:30 -- common/autotest_common.sh@10 -- # set +x 00:09:50.634 [2024-04-26 15:52:30.197941] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:09:50.634 [2024-04-26 15:52:30.198025] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.634 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.634 [2024-04-26 15:52:30.308547] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:50.893 [2024-04-26 15:52:30.546501] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.893 [2024-04-26 15:52:30.546549] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.893 [2024-04-26 15:52:30.546560] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.893 [2024-04-26 15:52:30.546571] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.893 [2024-04-26 15:52:30.546580] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:50.893 [2024-04-26 15:52:30.546659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.893 [2024-04-26 15:52:30.546743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.893 [2024-04-26 15:52:30.546816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.893 [2024-04-26 15:52:30.546824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.461 15:52:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:51.461 15:52:30 -- common/autotest_common.sh@850 -- # return 0 00:09:51.461 15:52:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:51.461 15:52:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:51.461 15:52:30 -- common/autotest_common.sh@10 -- # set +x 00:09:51.461 15:52:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.461 15:52:31 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:51.461 15:52:31 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:51.461 15:52:31 -- target/multitarget.sh@21 -- # jq length 00:09:51.461 15:52:31 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:51.461 15:52:31 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:51.719 "nvmf_tgt_1" 00:09:51.719 15:52:31 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:51.719 "nvmf_tgt_2" 00:09:51.719 15:52:31 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:51.719 15:52:31 -- target/multitarget.sh@28 -- # jq length 00:09:51.978 15:52:31 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:09:51.978 15:52:31 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:51.978 true 00:09:51.978 15:52:31 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:51.978 true 00:09:51.978 15:52:31 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:51.978 15:52:31 -- target/multitarget.sh@35 -- # jq length 00:09:52.237 15:52:31 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:52.237 15:52:31 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:52.237 15:52:31 -- target/multitarget.sh@41 -- # nvmftestfini 00:09:52.237 15:52:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:52.237 15:52:31 -- nvmf/common.sh@117 -- # sync 00:09:52.237 15:52:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:52.237 15:52:31 -- nvmf/common.sh@120 -- # set +e 00:09:52.237 15:52:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:52.237 15:52:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:52.237 rmmod nvme_tcp 00:09:52.237 rmmod nvme_fabrics 00:09:52.237 rmmod nvme_keyring 00:09:52.237 15:52:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:52.237 15:52:31 -- nvmf/common.sh@124 -- # set -e 00:09:52.237 15:52:31 -- nvmf/common.sh@125 -- # return 0 
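The nvmf_multitarget pass traced above reduces to a short create/inspect/delete cycle against the running nvmf_tgt; a condensed sketch of that flow, using the same multitarget_rpc.py helper the test drives (arguments as they appear in the trace):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $RPC nvmf_get_targets | jq length        # 1: only the default target exists
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  $RPC nvmf_get_targets | jq length        # now 3
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  $RPC nvmf_get_targets | jq length        # back to 1, then nvmftestfini tears down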
00:09:52.237 15:52:31 -- nvmf/common.sh@478 -- # '[' -n 2329968 ']' 00:09:52.237 15:52:31 -- nvmf/common.sh@479 -- # killprocess 2329968 00:09:52.237 15:52:31 -- common/autotest_common.sh@936 -- # '[' -z 2329968 ']' 00:09:52.237 15:52:31 -- common/autotest_common.sh@940 -- # kill -0 2329968 00:09:52.237 15:52:31 -- common/autotest_common.sh@941 -- # uname 00:09:52.237 15:52:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:52.237 15:52:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2329968 00:09:52.237 15:52:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:52.237 15:52:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:52.237 15:52:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2329968' 00:09:52.237 killing process with pid 2329968 00:09:52.237 15:52:31 -- common/autotest_common.sh@955 -- # kill 2329968 00:09:52.237 15:52:31 -- common/autotest_common.sh@960 -- # wait 2329968 00:09:53.616 15:52:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:53.616 15:52:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:53.616 15:52:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:53.616 15:52:33 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:53.616 15:52:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:53.616 15:52:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.616 15:52:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:53.616 15:52:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.523 15:52:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:55.523 00:09:55.523 real 0m10.647s 00:09:55.523 user 0m11.669s 00:09:55.523 sys 0m4.650s 00:09:55.523 15:52:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:55.523 15:52:35 -- common/autotest_common.sh@10 -- # set +x 00:09:55.523 ************************************ 00:09:55.523 END TEST nvmf_multitarget 00:09:55.523 ************************************ 00:09:55.782 15:52:35 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:55.782 15:52:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:55.782 15:52:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:55.782 15:52:35 -- common/autotest_common.sh@10 -- # set +x 00:09:55.782 ************************************ 00:09:55.782 START TEST nvmf_rpc 00:09:55.782 ************************************ 00:09:55.782 15:52:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:55.782 * Looking for test storage... 
00:09:55.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.782 15:52:35 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.782 15:52:35 -- nvmf/common.sh@7 -- # uname -s 00:09:55.782 15:52:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.782 15:52:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.782 15:52:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.782 15:52:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.782 15:52:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.782 15:52:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.782 15:52:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.782 15:52:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.782 15:52:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.782 15:52:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.782 15:52:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:55.782 15:52:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:55.782 15:52:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.782 15:52:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.782 15:52:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.782 15:52:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.782 15:52:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.782 15:52:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.782 15:52:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.782 15:52:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.782 15:52:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.782 15:52:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.782 15:52:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.782 15:52:35 -- paths/export.sh@5 -- # export PATH 00:09:55.782 15:52:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.782 15:52:35 -- nvmf/common.sh@47 -- # : 0 00:09:55.782 15:52:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:55.782 15:52:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:55.782 15:52:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.782 15:52:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.782 15:52:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.782 15:52:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:55.782 15:52:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:55.782 15:52:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:55.782 15:52:35 -- target/rpc.sh@11 -- # loops=5 00:09:55.782 15:52:35 -- target/rpc.sh@23 -- # nvmftestinit 00:09:55.782 15:52:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:55.782 15:52:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.782 15:52:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:55.782 15:52:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:55.782 15:52:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:55.782 15:52:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.782 15:52:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:55.782 15:52:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.782 15:52:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:55.782 15:52:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:55.782 15:52:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:55.782 15:52:35 -- common/autotest_common.sh@10 -- # set +x 00:10:02.354 15:52:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:02.355 15:52:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:02.355 15:52:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:02.355 15:52:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:02.355 15:52:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:02.355 15:52:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:02.355 15:52:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:02.355 15:52:40 -- nvmf/common.sh@295 -- # net_devs=() 00:10:02.355 15:52:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:02.355 15:52:40 -- nvmf/common.sh@296 -- # e810=() 00:10:02.355 15:52:40 -- nvmf/common.sh@296 -- # local -ga e810 00:10:02.355 
15:52:40 -- nvmf/common.sh@297 -- # x722=() 00:10:02.355 15:52:40 -- nvmf/common.sh@297 -- # local -ga x722 00:10:02.355 15:52:40 -- nvmf/common.sh@298 -- # mlx=() 00:10:02.355 15:52:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:02.355 15:52:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:02.355 15:52:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:02.355 15:52:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:02.355 15:52:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:02.355 15:52:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:02.355 15:52:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:02.355 15:52:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:02.355 15:52:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:02.355 15:52:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:02.355 15:52:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:02.355 15:52:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:02.355 15:52:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:02.355 15:52:40 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:02.355 15:52:40 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:02.355 15:52:40 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:02.355 15:52:40 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:02.355 15:52:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:02.355 15:52:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:02.355 15:52:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:02.355 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:02.355 15:52:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:02.355 15:52:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:02.355 15:52:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.355 15:52:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.355 15:52:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:02.355 15:52:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:02.355 15:52:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:02.355 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:02.355 15:52:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:02.355 15:52:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:02.355 15:52:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.355 15:52:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.355 15:52:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:02.355 15:52:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:02.355 15:52:40 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:02.355 15:52:40 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:02.355 15:52:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:02.355 15:52:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.355 15:52:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:02.355 15:52:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.355 15:52:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:02.355 Found net devices under 0000:86:00.0: cvl_0_0 00:10:02.355 15:52:40 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:10:02.355 15:52:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:02.355 15:52:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.355 15:52:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:02.355 15:52:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.355 15:52:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:02.355 Found net devices under 0000:86:00.1: cvl_0_1 00:10:02.355 15:52:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.355 15:52:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:02.355 15:52:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:02.355 15:52:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:02.355 15:52:40 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:02.355 15:52:40 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:02.355 15:52:40 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:02.355 15:52:40 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:02.355 15:52:40 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:02.355 15:52:40 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:02.355 15:52:40 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:02.355 15:52:40 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:02.355 15:52:40 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:02.355 15:52:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:02.355 15:52:40 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.355 15:52:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:02.355 15:52:40 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:02.355 15:52:40 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:02.355 15:52:40 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:02.355 15:52:40 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:02.355 15:52:40 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:02.355 15:52:40 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:02.355 15:52:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:02.355 15:52:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:02.355 15:52:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:02.355 15:52:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:02.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:02.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:10:02.355 00:10:02.355 --- 10.0.0.2 ping statistics --- 00:10:02.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.355 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:10:02.355 15:52:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:02.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:10:02.355 00:10:02.355 --- 10.0.0.1 ping statistics --- 00:10:02.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.355 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:10:02.355 15:52:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.355 15:52:41 -- nvmf/common.sh@411 -- # return 0 00:10:02.355 15:52:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:02.355 15:52:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.355 15:52:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:02.355 15:52:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:02.355 15:52:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.355 15:52:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:02.355 15:52:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:02.355 15:52:41 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:02.355 15:52:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:02.355 15:52:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:02.355 15:52:41 -- common/autotest_common.sh@10 -- # set +x 00:10:02.355 15:52:41 -- nvmf/common.sh@470 -- # nvmfpid=2333991 00:10:02.355 15:52:41 -- nvmf/common.sh@471 -- # waitforlisten 2333991 00:10:02.355 15:52:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:02.355 15:52:41 -- common/autotest_common.sh@817 -- # '[' -z 2333991 ']' 00:10:02.355 15:52:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.355 15:52:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:02.355 15:52:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.355 15:52:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:02.355 15:52:41 -- common/autotest_common.sh@10 -- # set +x 00:10:02.355 [2024-04-26 15:52:41.154605] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:10:02.355 [2024-04-26 15:52:41.154688] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.355 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.355 [2024-04-26 15:52:41.259962] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:02.355 [2024-04-26 15:52:41.470480] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.355 [2024-04-26 15:52:41.470530] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.356 [2024-04-26 15:52:41.470540] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.356 [2024-04-26 15:52:41.470550] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.356 [2024-04-26 15:52:41.470557] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
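For the rpc.sh pass that follows, nvmfappstart launches the target inside the same namespace and then drives it over the default RPC socket; roughly (a sketch assuming rpc_cmd is the autotest wrapper around the target's JSON-RPC interface, as it is used throughout the trace):

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # once the app listens on /var/tmp/spdk.sock (waitforlisten), RPCs can be issued:
  rpc_cmd nvmf_get_stats                            # four poll groups, "transports": [] until one is created
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192   # logs "*** TCP Transport Init ***"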
00:10:02.356 [2024-04-26 15:52:41.470644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.356 [2024-04-26 15:52:41.470758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.356 [2024-04-26 15:52:41.470829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.356 [2024-04-26 15:52:41.470838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.356 15:52:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:02.356 15:52:41 -- common/autotest_common.sh@850 -- # return 0 00:10:02.356 15:52:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:02.356 15:52:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:02.356 15:52:41 -- common/autotest_common.sh@10 -- # set +x 00:10:02.356 15:52:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.356 15:52:41 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:02.356 15:52:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:02.356 15:52:41 -- common/autotest_common.sh@10 -- # set +x 00:10:02.356 15:52:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:02.356 15:52:41 -- target/rpc.sh@26 -- # stats='{ 00:10:02.356 "tick_rate": 2300000000, 00:10:02.356 "poll_groups": [ 00:10:02.356 { 00:10:02.356 "name": "nvmf_tgt_poll_group_0", 00:10:02.356 "admin_qpairs": 0, 00:10:02.356 "io_qpairs": 0, 00:10:02.356 "current_admin_qpairs": 0, 00:10:02.356 "current_io_qpairs": 0, 00:10:02.356 "pending_bdev_io": 0, 00:10:02.356 "completed_nvme_io": 0, 00:10:02.356 "transports": [] 00:10:02.356 }, 00:10:02.356 { 00:10:02.356 "name": "nvmf_tgt_poll_group_1", 00:10:02.356 "admin_qpairs": 0, 00:10:02.356 "io_qpairs": 0, 00:10:02.356 "current_admin_qpairs": 0, 00:10:02.356 "current_io_qpairs": 0, 00:10:02.356 "pending_bdev_io": 0, 00:10:02.356 "completed_nvme_io": 0, 00:10:02.356 "transports": [] 00:10:02.356 }, 00:10:02.356 { 00:10:02.356 "name": "nvmf_tgt_poll_group_2", 00:10:02.356 "admin_qpairs": 0, 00:10:02.356 "io_qpairs": 0, 00:10:02.356 "current_admin_qpairs": 0, 00:10:02.356 "current_io_qpairs": 0, 00:10:02.356 "pending_bdev_io": 0, 00:10:02.356 "completed_nvme_io": 0, 00:10:02.356 "transports": [] 00:10:02.356 }, 00:10:02.356 { 00:10:02.356 "name": "nvmf_tgt_poll_group_3", 00:10:02.356 "admin_qpairs": 0, 00:10:02.356 "io_qpairs": 0, 00:10:02.356 "current_admin_qpairs": 0, 00:10:02.356 "current_io_qpairs": 0, 00:10:02.356 "pending_bdev_io": 0, 00:10:02.356 "completed_nvme_io": 0, 00:10:02.356 "transports": [] 00:10:02.356 } 00:10:02.356 ] 00:10:02.356 }' 00:10:02.356 15:52:41 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:02.356 15:52:41 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:02.356 15:52:41 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:02.356 15:52:41 -- target/rpc.sh@15 -- # wc -l 00:10:02.356 15:52:42 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:02.356 15:52:42 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:02.616 15:52:42 -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:02.616 15:52:42 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:02.616 15:52:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:02.616 15:52:42 -- common/autotest_common.sh@10 -- # set +x 00:10:02.616 [2024-04-26 15:52:42.075602] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:02.616 15:52:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:02.616 15:52:42 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:02.616 15:52:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:02.616 15:52:42 -- common/autotest_common.sh@10 -- # set +x 00:10:02.616 15:52:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:02.616 15:52:42 -- target/rpc.sh@33 -- # stats='{ 00:10:02.616 "tick_rate": 2300000000, 00:10:02.616 "poll_groups": [ 00:10:02.616 { 00:10:02.616 "name": "nvmf_tgt_poll_group_0", 00:10:02.616 "admin_qpairs": 0, 00:10:02.616 "io_qpairs": 0, 00:10:02.616 "current_admin_qpairs": 0, 00:10:02.616 "current_io_qpairs": 0, 00:10:02.616 "pending_bdev_io": 0, 00:10:02.616 "completed_nvme_io": 0, 00:10:02.616 "transports": [ 00:10:02.616 { 00:10:02.616 "trtype": "TCP" 00:10:02.616 } 00:10:02.616 ] 00:10:02.616 }, 00:10:02.616 { 00:10:02.616 "name": "nvmf_tgt_poll_group_1", 00:10:02.616 "admin_qpairs": 0, 00:10:02.616 "io_qpairs": 0, 00:10:02.616 "current_admin_qpairs": 0, 00:10:02.616 "current_io_qpairs": 0, 00:10:02.616 "pending_bdev_io": 0, 00:10:02.616 "completed_nvme_io": 0, 00:10:02.616 "transports": [ 00:10:02.616 { 00:10:02.616 "trtype": "TCP" 00:10:02.616 } 00:10:02.616 ] 00:10:02.616 }, 00:10:02.616 { 00:10:02.616 "name": "nvmf_tgt_poll_group_2", 00:10:02.616 "admin_qpairs": 0, 00:10:02.616 "io_qpairs": 0, 00:10:02.616 "current_admin_qpairs": 0, 00:10:02.616 "current_io_qpairs": 0, 00:10:02.616 "pending_bdev_io": 0, 00:10:02.616 "completed_nvme_io": 0, 00:10:02.616 "transports": [ 00:10:02.616 { 00:10:02.616 "trtype": "TCP" 00:10:02.616 } 00:10:02.616 ] 00:10:02.616 }, 00:10:02.616 { 00:10:02.616 "name": "nvmf_tgt_poll_group_3", 00:10:02.616 "admin_qpairs": 0, 00:10:02.616 "io_qpairs": 0, 00:10:02.616 "current_admin_qpairs": 0, 00:10:02.616 "current_io_qpairs": 0, 00:10:02.616 "pending_bdev_io": 0, 00:10:02.616 "completed_nvme_io": 0, 00:10:02.616 "transports": [ 00:10:02.616 { 00:10:02.616 "trtype": "TCP" 00:10:02.616 } 00:10:02.616 ] 00:10:02.616 } 00:10:02.616 ] 00:10:02.616 }' 00:10:02.616 15:52:42 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:02.616 15:52:42 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:02.616 15:52:42 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:02.616 15:52:42 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:02.616 15:52:42 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:02.616 15:52:42 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:02.616 15:52:42 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:02.616 15:52:42 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:02.616 15:52:42 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:02.616 15:52:42 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:02.616 15:52:42 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:02.616 15:52:42 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:02.616 15:52:42 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:02.616 15:52:42 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:02.616 15:52:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:02.616 15:52:42 -- common/autotest_common.sh@10 -- # set +x 00:10:02.616 Malloc1 00:10:02.616 15:52:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:02.616 15:52:42 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:02.616 15:52:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:02.616 15:52:42 -- common/autotest_common.sh@10 -- # set +x 00:10:02.876 
15:52:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:02.876 15:52:42 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:02.876 15:52:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:02.876 15:52:42 -- common/autotest_common.sh@10 -- # set +x 00:10:02.876 15:52:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:02.876 15:52:42 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:02.876 15:52:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:02.876 15:52:42 -- common/autotest_common.sh@10 -- # set +x 00:10:02.876 15:52:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:02.876 15:52:42 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.876 15:52:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:02.876 15:52:42 -- common/autotest_common.sh@10 -- # set +x 00:10:02.876 [2024-04-26 15:52:42.321825] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:02.876 15:52:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:02.876 15:52:42 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:10:02.876 15:52:42 -- common/autotest_common.sh@638 -- # local es=0 00:10:02.876 15:52:42 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:10:02.876 15:52:42 -- common/autotest_common.sh@626 -- # local arg=nvme 00:10:02.876 15:52:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:02.876 15:52:42 -- common/autotest_common.sh@630 -- # type -t nvme 00:10:02.876 15:52:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:02.876 15:52:42 -- common/autotest_common.sh@632 -- # type -P nvme 00:10:02.876 15:52:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:02.876 15:52:42 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:10:02.876 15:52:42 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:10:02.876 15:52:42 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:10:02.876 [2024-04-26 15:52:42.351265] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:10:02.876 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:02.876 could not add new controller: failed to write to nvme-fabrics device 00:10:02.876 15:52:42 -- common/autotest_common.sh@641 -- # es=1 00:10:02.876 15:52:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:02.876 15:52:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:02.876 15:52:42 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:10:02.876 15:52:42 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:02.876 15:52:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:02.876 15:52:42 -- common/autotest_common.sh@10 -- # set +x 00:10:02.876 15:52:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:02.876 15:52:42 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:04.253 15:52:43 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:04.253 15:52:43 -- common/autotest_common.sh@1184 -- # local i=0 00:10:04.253 15:52:43 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:04.253 15:52:43 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:10:04.253 15:52:43 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:06.160 15:52:45 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:06.160 15:52:45 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:06.160 15:52:45 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:06.160 15:52:45 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:10:06.160 15:52:45 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:06.160 15:52:45 -- common/autotest_common.sh@1194 -- # return 0 00:10:06.160 15:52:45 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:06.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.160 15:52:45 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:06.160 15:52:45 -- common/autotest_common.sh@1205 -- # local i=0 00:10:06.160 15:52:45 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:10:06.160 15:52:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.420 15:52:45 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:10:06.420 15:52:45 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.420 15:52:45 -- common/autotest_common.sh@1217 -- # return 0 00:10:06.420 15:52:45 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:06.420 15:52:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:06.420 15:52:45 -- common/autotest_common.sh@10 -- # set +x 00:10:06.420 15:52:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:06.420 15:52:45 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:06.420 15:52:45 -- common/autotest_common.sh@638 -- # local es=0 00:10:06.420 15:52:45 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:06.420 15:52:45 -- common/autotest_common.sh@626 -- # local arg=nvme 00:10:06.420 15:52:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:06.420 15:52:45 -- common/autotest_common.sh@630 -- # type -t nvme 00:10:06.420 15:52:45 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:06.420 15:52:45 -- common/autotest_common.sh@632 -- # type -P nvme 00:10:06.420 15:52:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:06.420 15:52:45 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:10:06.420 15:52:45 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:10:06.420 15:52:45 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:06.420 [2024-04-26 15:52:45.885495] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:10:06.420 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:06.420 could not add new controller: failed to write to nvme-fabrics device 00:10:06.420 15:52:45 -- common/autotest_common.sh@641 -- # es=1 00:10:06.420 15:52:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:06.420 15:52:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:06.420 15:52:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:06.420 15:52:45 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:06.420 15:52:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:06.420 15:52:45 -- common/autotest_common.sh@10 -- # set +x 00:10:06.420 15:52:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:06.420 15:52:45 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:07.359 15:52:47 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:07.359 15:52:47 -- common/autotest_common.sh@1184 -- # local i=0 00:10:07.359 15:52:47 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:07.359 15:52:47 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:10:07.359 15:52:47 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:09.897 15:52:49 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:09.897 15:52:49 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:09.897 15:52:49 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:09.897 15:52:49 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:10:09.897 15:52:49 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:09.897 15:52:49 -- common/autotest_common.sh@1194 -- # return 0 00:10:09.897 15:52:49 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:09.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.897 15:52:49 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:09.897 15:52:49 -- common/autotest_common.sh@1205 -- # local i=0 00:10:09.897 15:52:49 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:10:09.897 15:52:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.897 15:52:49 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:10:09.897 15:52:49 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.897 15:52:49 -- common/autotest_common.sh@1217 -- # return 0 00:10:09.897 15:52:49 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.897 15:52:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:09.897 15:52:49 -- common/autotest_common.sh@10 -- # set +x 00:10:09.897 15:52:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:09.897 15:52:49 -- target/rpc.sh@81 -- # seq 1 5 00:10:09.897 15:52:49 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:09.897 15:52:49 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:09.897 15:52:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:09.897 15:52:49 -- common/autotest_common.sh@10 -- # set +x 00:10:09.897 15:52:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:09.897 15:52:49 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:09.897 15:52:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:09.897 15:52:49 -- common/autotest_common.sh@10 -- # set +x 00:10:09.897 [2024-04-26 15:52:49.368433] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:09.897 15:52:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:09.897 15:52:49 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:09.897 15:52:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:09.897 15:52:49 -- common/autotest_common.sh@10 -- # set +x 00:10:09.897 15:52:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:09.897 15:52:49 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:09.897 15:52:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:09.897 15:52:49 -- common/autotest_common.sh@10 -- # set +x 00:10:09.897 15:52:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:09.897 15:52:49 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:10.836 15:52:50 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:10.836 15:52:50 -- common/autotest_common.sh@1184 -- # local i=0 00:10:10.836 15:52:50 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:10.836 15:52:50 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:10:10.836 15:52:50 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:13.371 15:52:52 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:13.371 15:52:52 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:13.371 15:52:52 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:13.371 15:52:52 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:10:13.371 15:52:52 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:13.371 15:52:52 -- common/autotest_common.sh@1194 -- # return 0 00:10:13.371 15:52:52 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:13.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.371 15:52:52 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:13.371 15:52:52 -- common/autotest_common.sh@1205 -- # local i=0 00:10:13.372 15:52:52 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:10:13.372 15:52:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
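Each of the five seq 1 5 iterations above repeats the same subsystem lifecycle; one iteration, condensed from the trace (waitforserial and waitforserial_disconnect poll lsblk -l -o NAME,SERIAL for the SPDKISFASTANDAWESOME serial between connect and disconnect):

  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # waitforserial: block device shows up with the SPDKISFASTANDAWESOME serial
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1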
00:10:13.372 15:52:52 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:10:13.372 15:52:52 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:13.372 15:52:52 -- common/autotest_common.sh@1217 -- # return 0 00:10:13.372 15:52:52 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:13.372 15:52:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.372 15:52:52 -- common/autotest_common.sh@10 -- # set +x 00:10:13.372 15:52:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.372 15:52:52 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:13.372 15:52:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.372 15:52:52 -- common/autotest_common.sh@10 -- # set +x 00:10:13.372 15:52:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.372 15:52:52 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:13.372 15:52:52 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:13.372 15:52:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.372 15:52:52 -- common/autotest_common.sh@10 -- # set +x 00:10:13.372 15:52:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.372 15:52:52 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.372 15:52:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.372 15:52:52 -- common/autotest_common.sh@10 -- # set +x 00:10:13.372 [2024-04-26 15:52:52.859019] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.372 15:52:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.372 15:52:52 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:13.372 15:52:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.372 15:52:52 -- common/autotest_common.sh@10 -- # set +x 00:10:13.372 15:52:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.372 15:52:52 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:13.372 15:52:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:13.372 15:52:52 -- common/autotest_common.sh@10 -- # set +x 00:10:13.372 15:52:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:13.372 15:52:52 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:14.750 15:52:53 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:14.750 15:52:53 -- common/autotest_common.sh@1184 -- # local i=0 00:10:14.750 15:52:53 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:14.750 15:52:53 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:10:14.750 15:52:53 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:16.657 15:52:56 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:16.657 15:52:56 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:16.657 15:52:56 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:16.657 15:52:56 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:10:16.657 15:52:56 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:16.657 15:52:56 -- 
common/autotest_common.sh@1194 -- # return 0 00:10:16.657 15:52:56 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:16.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.657 15:52:56 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:16.657 15:52:56 -- common/autotest_common.sh@1205 -- # local i=0 00:10:16.657 15:52:56 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:10:16.657 15:52:56 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.657 15:52:56 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:10:16.657 15:52:56 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.657 15:52:56 -- common/autotest_common.sh@1217 -- # return 0 00:10:16.657 15:52:56 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:16.657 15:52:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:16.657 15:52:56 -- common/autotest_common.sh@10 -- # set +x 00:10:16.657 15:52:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:16.657 15:52:56 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.657 15:52:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:16.657 15:52:56 -- common/autotest_common.sh@10 -- # set +x 00:10:16.657 15:52:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:16.657 15:52:56 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:16.657 15:52:56 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:16.657 15:52:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:16.657 15:52:56 -- common/autotest_common.sh@10 -- # set +x 00:10:16.657 15:52:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:16.657 15:52:56 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.657 15:52:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:16.657 15:52:56 -- common/autotest_common.sh@10 -- # set +x 00:10:16.657 [2024-04-26 15:52:56.319651] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.657 15:52:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:16.657 15:52:56 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:16.657 15:52:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:16.657 15:52:56 -- common/autotest_common.sh@10 -- # set +x 00:10:16.657 15:52:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:16.657 15:52:56 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:16.657 15:52:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:16.657 15:52:56 -- common/autotest_common.sh@10 -- # set +x 00:10:16.657 15:52:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:16.917 15:52:56 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:17.853 15:52:57 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:17.853 15:52:57 -- common/autotest_common.sh@1184 -- # local i=0 00:10:17.853 15:52:57 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:17.853 15:52:57 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:10:17.853 15:52:57 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:20.391 15:52:59 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:20.391 15:52:59 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:20.391 15:52:59 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:20.391 15:52:59 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:10:20.391 15:52:59 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:20.391 15:52:59 -- common/autotest_common.sh@1194 -- # return 0 00:10:20.391 15:52:59 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:20.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.391 15:52:59 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:20.391 15:52:59 -- common/autotest_common.sh@1205 -- # local i=0 00:10:20.391 15:52:59 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:10:20.391 15:52:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.391 15:52:59 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:10:20.391 15:52:59 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.391 15:52:59 -- common/autotest_common.sh@1217 -- # return 0 00:10:20.391 15:52:59 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:20.391 15:52:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:20.391 15:52:59 -- common/autotest_common.sh@10 -- # set +x 00:10:20.391 15:52:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:20.391 15:52:59 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:20.391 15:52:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:20.391 15:52:59 -- common/autotest_common.sh@10 -- # set +x 00:10:20.391 15:52:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:20.391 15:52:59 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:20.391 15:52:59 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:20.391 15:52:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:20.391 15:52:59 -- common/autotest_common.sh@10 -- # set +x 00:10:20.391 15:52:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:20.391 15:52:59 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.391 15:52:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:20.391 15:52:59 -- common/autotest_common.sh@10 -- # set +x 00:10:20.391 [2024-04-26 15:52:59.853929] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.392 15:52:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:20.392 15:52:59 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:20.392 15:52:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:20.392 15:52:59 -- common/autotest_common.sh@10 -- # set +x 00:10:20.392 15:52:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:20.392 15:52:59 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:20.392 15:52:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:20.392 15:52:59 -- common/autotest_common.sh@10 -- # set +x 00:10:20.392 15:52:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:20.392 
15:52:59 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:21.329 15:53:00 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:21.329 15:53:00 -- common/autotest_common.sh@1184 -- # local i=0 00:10:21.329 15:53:00 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:21.329 15:53:00 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:10:21.329 15:53:00 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:23.320 15:53:02 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:23.320 15:53:02 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:23.320 15:53:02 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:23.579 15:53:03 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:10:23.579 15:53:03 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:23.579 15:53:03 -- common/autotest_common.sh@1194 -- # return 0 00:10:23.579 15:53:03 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:23.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.839 15:53:03 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:23.839 15:53:03 -- common/autotest_common.sh@1205 -- # local i=0 00:10:23.839 15:53:03 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:10:23.839 15:53:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.839 15:53:03 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:10:23.839 15:53:03 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.839 15:53:03 -- common/autotest_common.sh@1217 -- # return 0 00:10:23.839 15:53:03 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:23.839 15:53:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:23.839 15:53:03 -- common/autotest_common.sh@10 -- # set +x 00:10:23.839 15:53:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:23.839 15:53:03 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:23.839 15:53:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:23.839 15:53:03 -- common/autotest_common.sh@10 -- # set +x 00:10:23.839 15:53:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:23.839 15:53:03 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:23.839 15:53:03 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:23.839 15:53:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:23.839 15:53:03 -- common/autotest_common.sh@10 -- # set +x 00:10:23.839 15:53:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:23.839 15:53:03 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:23.839 15:53:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:23.839 15:53:03 -- common/autotest_common.sh@10 -- # set +x 00:10:23.839 [2024-04-26 15:53:03.347084] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.839 15:53:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:23.839 15:53:03 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:23.839 
15:53:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:23.839 15:53:03 -- common/autotest_common.sh@10 -- # set +x 00:10:23.839 15:53:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:23.839 15:53:03 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:23.839 15:53:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:23.839 15:53:03 -- common/autotest_common.sh@10 -- # set +x 00:10:23.839 15:53:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:23.839 15:53:03 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:24.826 15:53:04 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:24.826 15:53:04 -- common/autotest_common.sh@1184 -- # local i=0 00:10:24.826 15:53:04 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:24.826 15:53:04 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:10:24.826 15:53:04 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:27.361 15:53:06 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:27.361 15:53:06 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:27.361 15:53:06 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:27.361 15:53:06 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:10:27.361 15:53:06 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:27.361 15:53:06 -- common/autotest_common.sh@1194 -- # return 0 00:10:27.361 15:53:06 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:27.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.361 15:53:06 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:27.361 15:53:06 -- common/autotest_common.sh@1205 -- # local i=0 00:10:27.361 15:53:06 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:10:27.361 15:53:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.361 15:53:06 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:10:27.361 15:53:06 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.361 15:53:06 -- common/autotest_common.sh@1217 -- # return 0 00:10:27.361 15:53:06 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:27.361 15:53:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.361 15:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:27.361 15:53:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.361 15:53:06 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.361 15:53:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.361 15:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:27.361 15:53:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.361 15:53:06 -- target/rpc.sh@99 -- # seq 1 5 00:10:27.361 15:53:06 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:27.361 15:53:06 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:27.361 15:53:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.361 15:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:27.361 15:53:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.361 15:53:06 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.361 15:53:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.361 15:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:27.361 [2024-04-26 15:53:06.856377] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.361 15:53:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.361 15:53:06 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:27.361 15:53:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.361 15:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:27.361 15:53:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.361 15:53:06 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:27.361 15:53:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.361 15:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:27.361 15:53:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.361 15:53:06 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.361 15:53:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.361 15:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:27.361 15:53:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.361 15:53:06 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.361 15:53:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.361 15:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:27.361 15:53:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.361 15:53:06 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:27.361 15:53:06 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:27.361 15:53:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.361 15:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:27.361 15:53:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.361 15:53:06 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.361 15:53:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.361 15:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:27.361 [2024-04-26 15:53:06.904519] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.361 15:53:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.361 15:53:06 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:27.361 15:53:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.361 15:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:27.361 15:53:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.361 15:53:06 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:27.361 15:53:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.361 15:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:27.361 15:53:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.361 15:53:06 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.361 15:53:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.361 15:53:06 -- 
common/autotest_common.sh@10 -- # set +x 00:10:27.361 15:53:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.361 15:53:06 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.361 15:53:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.361 15:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:27.361 15:53:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.361 15:53:06 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:27.361 15:53:06 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:27.361 15:53:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.362 15:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:27.362 15:53:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.362 15:53:06 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.362 15:53:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.362 15:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:27.362 [2024-04-26 15:53:06.952661] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.362 15:53:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.362 15:53:06 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:27.362 15:53:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.362 15:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:27.362 15:53:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.362 15:53:06 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:27.362 15:53:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.362 15:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:27.362 15:53:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.362 15:53:06 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.362 15:53:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.362 15:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:27.362 15:53:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.362 15:53:06 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.362 15:53:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.362 15:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:27.362 15:53:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.362 15:53:06 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:27.362 15:53:06 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:27.362 15:53:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.362 15:53:06 -- common/autotest_common.sh@10 -- # set +x 00:10:27.362 15:53:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.362 15:53:07 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.362 15:53:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.362 15:53:07 -- common/autotest_common.sh@10 -- # set +x 00:10:27.362 [2024-04-26 15:53:07.004867] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.362 15:53:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.362 
15:53:07 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:27.362 15:53:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.362 15:53:07 -- common/autotest_common.sh@10 -- # set +x 00:10:27.362 15:53:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.362 15:53:07 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:27.362 15:53:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.362 15:53:07 -- common/autotest_common.sh@10 -- # set +x 00:10:27.362 15:53:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.362 15:53:07 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.362 15:53:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.362 15:53:07 -- common/autotest_common.sh@10 -- # set +x 00:10:27.362 15:53:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.362 15:53:07 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.362 15:53:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.362 15:53:07 -- common/autotest_common.sh@10 -- # set +x 00:10:27.362 15:53:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.362 15:53:07 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:27.362 15:53:07 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:27.362 15:53:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.362 15:53:07 -- common/autotest_common.sh@10 -- # set +x 00:10:27.622 15:53:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.622 15:53:07 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.622 15:53:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.622 15:53:07 -- common/autotest_common.sh@10 -- # set +x 00:10:27.622 [2024-04-26 15:53:07.053035] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.622 15:53:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.622 15:53:07 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:27.622 15:53:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.622 15:53:07 -- common/autotest_common.sh@10 -- # set +x 00:10:27.622 15:53:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.622 15:53:07 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:27.622 15:53:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.622 15:53:07 -- common/autotest_common.sh@10 -- # set +x 00:10:27.622 15:53:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.622 15:53:07 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.622 15:53:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.622 15:53:07 -- common/autotest_common.sh@10 -- # set +x 00:10:27.622 15:53:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.622 15:53:07 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.622 15:53:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.622 15:53:07 -- common/autotest_common.sh@10 -- # set +x 00:10:27.622 15:53:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.623 15:53:07 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:10:27.623 15:53:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.623 15:53:07 -- common/autotest_common.sh@10 -- # set +x 00:10:27.623 15:53:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.623 15:53:07 -- target/rpc.sh@110 -- # stats='{ 00:10:27.623 "tick_rate": 2300000000, 00:10:27.623 "poll_groups": [ 00:10:27.623 { 00:10:27.623 "name": "nvmf_tgt_poll_group_0", 00:10:27.623 "admin_qpairs": 2, 00:10:27.623 "io_qpairs": 168, 00:10:27.623 "current_admin_qpairs": 0, 00:10:27.623 "current_io_qpairs": 0, 00:10:27.623 "pending_bdev_io": 0, 00:10:27.623 "completed_nvme_io": 267, 00:10:27.623 "transports": [ 00:10:27.623 { 00:10:27.623 "trtype": "TCP" 00:10:27.623 } 00:10:27.623 ] 00:10:27.623 }, 00:10:27.623 { 00:10:27.623 "name": "nvmf_tgt_poll_group_1", 00:10:27.623 "admin_qpairs": 2, 00:10:27.623 "io_qpairs": 168, 00:10:27.623 "current_admin_qpairs": 0, 00:10:27.623 "current_io_qpairs": 0, 00:10:27.623 "pending_bdev_io": 0, 00:10:27.623 "completed_nvme_io": 267, 00:10:27.623 "transports": [ 00:10:27.623 { 00:10:27.623 "trtype": "TCP" 00:10:27.623 } 00:10:27.623 ] 00:10:27.623 }, 00:10:27.623 { 00:10:27.623 "name": "nvmf_tgt_poll_group_2", 00:10:27.623 "admin_qpairs": 1, 00:10:27.623 "io_qpairs": 168, 00:10:27.623 "current_admin_qpairs": 0, 00:10:27.623 "current_io_qpairs": 0, 00:10:27.623 "pending_bdev_io": 0, 00:10:27.623 "completed_nvme_io": 268, 00:10:27.623 "transports": [ 00:10:27.623 { 00:10:27.623 "trtype": "TCP" 00:10:27.623 } 00:10:27.623 ] 00:10:27.623 }, 00:10:27.623 { 00:10:27.623 "name": "nvmf_tgt_poll_group_3", 00:10:27.623 "admin_qpairs": 2, 00:10:27.623 "io_qpairs": 168, 00:10:27.623 "current_admin_qpairs": 0, 00:10:27.623 "current_io_qpairs": 0, 00:10:27.623 "pending_bdev_io": 0, 00:10:27.623 "completed_nvme_io": 220, 00:10:27.623 "transports": [ 00:10:27.623 { 00:10:27.623 "trtype": "TCP" 00:10:27.623 } 00:10:27.623 ] 00:10:27.623 } 00:10:27.623 ] 00:10:27.623 }' 00:10:27.623 15:53:07 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:27.623 15:53:07 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:27.623 15:53:07 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:27.623 15:53:07 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:27.623 15:53:07 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:27.623 15:53:07 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:27.623 15:53:07 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:27.623 15:53:07 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:27.623 15:53:07 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:27.623 15:53:07 -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:10:27.623 15:53:07 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:27.623 15:53:07 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:27.623 15:53:07 -- target/rpc.sh@123 -- # nvmftestfini 00:10:27.623 15:53:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:27.623 15:53:07 -- nvmf/common.sh@117 -- # sync 00:10:27.623 15:53:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:27.623 15:53:07 -- nvmf/common.sh@120 -- # set +e 00:10:27.623 15:53:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:27.623 15:53:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:27.623 rmmod nvme_tcp 00:10:27.623 rmmod nvme_fabrics 00:10:27.623 rmmod nvme_keyring 00:10:27.623 15:53:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:27.623 15:53:07 -- nvmf/common.sh@124 -- # set -e 00:10:27.623 15:53:07 -- 
nvmf/common.sh@125 -- # return 0 00:10:27.623 15:53:07 -- nvmf/common.sh@478 -- # '[' -n 2333991 ']' 00:10:27.623 15:53:07 -- nvmf/common.sh@479 -- # killprocess 2333991 00:10:27.623 15:53:07 -- common/autotest_common.sh@936 -- # '[' -z 2333991 ']' 00:10:27.623 15:53:07 -- common/autotest_common.sh@940 -- # kill -0 2333991 00:10:27.623 15:53:07 -- common/autotest_common.sh@941 -- # uname 00:10:27.623 15:53:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:27.623 15:53:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2333991 00:10:27.623 15:53:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:27.623 15:53:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:27.623 15:53:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2333991' 00:10:27.623 killing process with pid 2333991 00:10:27.623 15:53:07 -- common/autotest_common.sh@955 -- # kill 2333991 00:10:27.623 15:53:07 -- common/autotest_common.sh@960 -- # wait 2333991 00:10:29.533 15:53:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:29.533 15:53:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:29.533 15:53:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:29.534 15:53:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:29.534 15:53:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:29.534 15:53:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.534 15:53:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:29.534 15:53:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.444 15:53:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:31.444 00:10:31.444 real 0m35.519s 00:10:31.444 user 1m48.998s 00:10:31.444 sys 0m6.222s 00:10:31.444 15:53:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:31.444 15:53:10 -- common/autotest_common.sh@10 -- # set +x 00:10:31.444 ************************************ 00:10:31.444 END TEST nvmf_rpc 00:10:31.444 ************************************ 00:10:31.444 15:53:10 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:31.444 15:53:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:31.444 15:53:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:31.444 15:53:10 -- common/autotest_common.sh@10 -- # set +x 00:10:31.444 ************************************ 00:10:31.444 START TEST nvmf_invalid 00:10:31.444 ************************************ 00:10:31.444 15:53:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:31.444 * Looking for test storage... 
00:10:31.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:31.704 15:53:11 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:31.704 15:53:11 -- nvmf/common.sh@7 -- # uname -s 00:10:31.704 15:53:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.704 15:53:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.704 15:53:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.704 15:53:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.704 15:53:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.704 15:53:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.704 15:53:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.704 15:53:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.704 15:53:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.704 15:53:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.704 15:53:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:31.704 15:53:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:31.704 15:53:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.704 15:53:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.704 15:53:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:31.704 15:53:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.704 15:53:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:31.704 15:53:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.704 15:53:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.704 15:53:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.704 15:53:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.704 15:53:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.704 15:53:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.704 15:53:11 -- paths/export.sh@5 -- # export PATH 00:10:31.704 15:53:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.704 15:53:11 -- nvmf/common.sh@47 -- # : 0 00:10:31.704 15:53:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:31.704 15:53:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:31.704 15:53:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.704 15:53:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.704 15:53:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.704 15:53:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:31.704 15:53:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:31.704 15:53:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:31.704 15:53:11 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:31.704 15:53:11 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:31.704 15:53:11 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:31.704 15:53:11 -- target/invalid.sh@14 -- # target=foobar 00:10:31.704 15:53:11 -- target/invalid.sh@16 -- # RANDOM=0 00:10:31.704 15:53:11 -- target/invalid.sh@34 -- # nvmftestinit 00:10:31.704 15:53:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:31.704 15:53:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.704 15:53:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:31.704 15:53:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:31.704 15:53:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:31.704 15:53:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.704 15:53:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:31.704 15:53:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.704 15:53:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:31.704 15:53:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:31.704 15:53:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:31.704 15:53:11 -- common/autotest_common.sh@10 -- # set +x 00:10:36.980 15:53:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:36.980 15:53:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:36.980 15:53:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:36.980 15:53:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:36.980 15:53:16 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:36.980 15:53:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:36.980 15:53:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:36.980 15:53:16 -- nvmf/common.sh@295 -- # net_devs=() 00:10:36.980 15:53:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:36.980 15:53:16 -- nvmf/common.sh@296 -- # e810=() 00:10:36.980 15:53:16 -- nvmf/common.sh@296 -- # local -ga e810 00:10:36.980 15:53:16 -- nvmf/common.sh@297 -- # x722=() 00:10:36.980 15:53:16 -- nvmf/common.sh@297 -- # local -ga x722 00:10:36.980 15:53:16 -- nvmf/common.sh@298 -- # mlx=() 00:10:36.980 15:53:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:36.980 15:53:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.980 15:53:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.980 15:53:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.981 15:53:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.981 15:53:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.981 15:53:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.981 15:53:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.981 15:53:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.981 15:53:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.981 15:53:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.981 15:53:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.981 15:53:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:36.981 15:53:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:36.981 15:53:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:36.981 15:53:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:36.981 15:53:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:36.981 15:53:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:36.981 15:53:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:36.981 15:53:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:36.981 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:36.981 15:53:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:36.981 15:53:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:36.981 15:53:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.981 15:53:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.981 15:53:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:36.981 15:53:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:36.981 15:53:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:36.981 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:36.981 15:53:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:36.981 15:53:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:36.981 15:53:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.981 15:53:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.981 15:53:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:36.981 15:53:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:36.981 15:53:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:36.981 15:53:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:36.981 15:53:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:36.981 
15:53:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.981 15:53:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:36.981 15:53:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.981 15:53:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:36.981 Found net devices under 0000:86:00.0: cvl_0_0 00:10:36.981 15:53:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.981 15:53:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:36.981 15:53:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.981 15:53:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:36.981 15:53:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.981 15:53:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:36.981 Found net devices under 0000:86:00.1: cvl_0_1 00:10:36.981 15:53:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.981 15:53:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:36.981 15:53:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:36.981 15:53:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:36.981 15:53:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:36.981 15:53:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:36.981 15:53:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.981 15:53:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.981 15:53:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:36.981 15:53:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:36.981 15:53:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:36.981 15:53:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:36.981 15:53:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:36.981 15:53:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:36.981 15:53:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.981 15:53:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:36.981 15:53:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:36.981 15:53:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:36.981 15:53:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:36.981 15:53:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:36.981 15:53:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:36.981 15:53:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:36.981 15:53:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:36.981 15:53:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:36.981 15:53:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:36.981 15:53:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:36.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:36.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:10:36.981 00:10:36.981 --- 10.0.0.2 ping statistics --- 00:10:36.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.981 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:10:36.981 15:53:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:36.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:36.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:10:36.981 00:10:36.981 --- 10.0.0.1 ping statistics --- 00:10:36.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.981 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:10:36.981 15:53:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.981 15:53:16 -- nvmf/common.sh@411 -- # return 0 00:10:36.981 15:53:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:36.981 15:53:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.981 15:53:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:36.981 15:53:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:36.981 15:53:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.981 15:53:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:36.981 15:53:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:37.242 15:53:16 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:37.242 15:53:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:37.242 15:53:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:37.242 15:53:16 -- common/autotest_common.sh@10 -- # set +x 00:10:37.242 15:53:16 -- nvmf/common.sh@470 -- # nvmfpid=2342062 00:10:37.242 15:53:16 -- nvmf/common.sh@471 -- # waitforlisten 2342062 00:10:37.242 15:53:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:37.242 15:53:16 -- common/autotest_common.sh@817 -- # '[' -z 2342062 ']' 00:10:37.242 15:53:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.242 15:53:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:37.242 15:53:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.242 15:53:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:37.242 15:53:16 -- common/autotest_common.sh@10 -- # set +x 00:10:37.242 [2024-04-26 15:53:16.773007] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:10:37.242 [2024-04-26 15:53:16.773095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.242 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.242 [2024-04-26 15:53:16.882642] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.502 [2024-04-26 15:53:17.108781] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.503 [2024-04-26 15:53:17.108827] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.503 [2024-04-26 15:53:17.108837] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.503 [2024-04-26 15:53:17.108848] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.503 [2024-04-26 15:53:17.108855] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:37.503 [2024-04-26 15:53:17.108928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.503 [2024-04-26 15:53:17.109007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.503 [2024-04-26 15:53:17.109074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.503 [2024-04-26 15:53:17.109090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.071 15:53:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:38.071 15:53:17 -- common/autotest_common.sh@850 -- # return 0 00:10:38.071 15:53:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:38.071 15:53:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:38.071 15:53:17 -- common/autotest_common.sh@10 -- # set +x 00:10:38.071 15:53:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.071 15:53:17 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:38.071 15:53:17 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8015 00:10:38.071 [2024-04-26 15:53:17.747183] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:38.330 15:53:17 -- target/invalid.sh@40 -- # out='request: 00:10:38.330 { 00:10:38.330 "nqn": "nqn.2016-06.io.spdk:cnode8015", 00:10:38.330 "tgt_name": "foobar", 00:10:38.330 "method": "nvmf_create_subsystem", 00:10:38.330 "req_id": 1 00:10:38.330 } 00:10:38.330 Got JSON-RPC error response 00:10:38.330 response: 00:10:38.330 { 00:10:38.330 "code": -32603, 00:10:38.330 "message": "Unable to find target foobar" 00:10:38.330 }' 00:10:38.330 15:53:17 -- target/invalid.sh@41 -- # [[ request: 00:10:38.330 { 00:10:38.330 "nqn": "nqn.2016-06.io.spdk:cnode8015", 00:10:38.330 "tgt_name": "foobar", 00:10:38.330 "method": "nvmf_create_subsystem", 00:10:38.330 "req_id": 1 00:10:38.330 } 00:10:38.330 Got JSON-RPC error response 00:10:38.330 response: 00:10:38.330 { 00:10:38.330 "code": -32603, 00:10:38.330 "message": "Unable to find target foobar" 00:10:38.330 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:38.330 15:53:17 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:38.330 15:53:17 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode20144 00:10:38.330 [2024-04-26 15:53:17.935913] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20144: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:38.330 15:53:17 -- target/invalid.sh@45 -- # out='request: 00:10:38.330 { 00:10:38.330 "nqn": "nqn.2016-06.io.spdk:cnode20144", 00:10:38.330 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:38.330 "method": "nvmf_create_subsystem", 00:10:38.330 "req_id": 1 00:10:38.330 } 00:10:38.330 Got JSON-RPC error response 00:10:38.330 response: 00:10:38.330 { 00:10:38.330 "code": -32602, 00:10:38.330 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:38.330 }' 00:10:38.330 15:53:17 -- target/invalid.sh@46 -- # [[ request: 00:10:38.330 { 00:10:38.330 "nqn": "nqn.2016-06.io.spdk:cnode20144", 00:10:38.330 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:38.330 "method": "nvmf_create_subsystem", 00:10:38.330 "req_id": 1 00:10:38.330 } 00:10:38.330 Got JSON-RPC error response 00:10:38.330 response: 00:10:38.330 { 
00:10:38.330 "code": -32602, 00:10:38.330 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:38.330 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:38.330 15:53:17 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:38.330 15:53:17 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12389 00:10:38.590 [2024-04-26 15:53:18.120516] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12389: invalid model number 'SPDK_Controller' 00:10:38.590 15:53:18 -- target/invalid.sh@50 -- # out='request: 00:10:38.590 { 00:10:38.590 "nqn": "nqn.2016-06.io.spdk:cnode12389", 00:10:38.590 "model_number": "SPDK_Controller\u001f", 00:10:38.590 "method": "nvmf_create_subsystem", 00:10:38.590 "req_id": 1 00:10:38.590 } 00:10:38.590 Got JSON-RPC error response 00:10:38.590 response: 00:10:38.590 { 00:10:38.590 "code": -32602, 00:10:38.590 "message": "Invalid MN SPDK_Controller\u001f" 00:10:38.590 }' 00:10:38.590 15:53:18 -- target/invalid.sh@51 -- # [[ request: 00:10:38.590 { 00:10:38.590 "nqn": "nqn.2016-06.io.spdk:cnode12389", 00:10:38.590 "model_number": "SPDK_Controller\u001f", 00:10:38.590 "method": "nvmf_create_subsystem", 00:10:38.590 "req_id": 1 00:10:38.590 } 00:10:38.590 Got JSON-RPC error response 00:10:38.590 response: 00:10:38.590 { 00:10:38.590 "code": -32602, 00:10:38.590 "message": "Invalid MN SPDK_Controller\u001f" 00:10:38.590 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:38.590 15:53:18 -- target/invalid.sh@54 -- # gen_random_s 21 00:10:38.590 15:53:18 -- target/invalid.sh@19 -- # local length=21 ll 00:10:38.590 15:53:18 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:38.590 15:53:18 -- target/invalid.sh@21 -- # local chars 00:10:38.590 15:53:18 -- target/invalid.sh@22 -- # local string 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # printf %x 45 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # string+=- 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # printf %x 110 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # string+=n 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # printf %x 90 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # string+=Z 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # printf %x 44 00:10:38.590 15:53:18 -- 
target/invalid.sh@25 -- # echo -e '\x2c' 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # string+=, 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # printf %x 62 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # string+='>' 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # printf %x 39 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # string+=\' 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # printf %x 116 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x74' 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # string+=t 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # printf %x 91 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # string+='[' 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # printf %x 103 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x67' 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # string+=g 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # printf %x 103 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x67' 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # string+=g 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # printf %x 50 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # string+=2 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # printf %x 122 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # string+=z 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.590 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # printf %x 98 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x62' 00:10:38.590 15:53:18 -- target/invalid.sh@25 -- # string+=b 00:10:38.591 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.591 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.591 15:53:18 -- target/invalid.sh@25 -- # printf %x 51 00:10:38.591 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:38.591 15:53:18 -- target/invalid.sh@25 -- # string+=3 00:10:38.591 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.591 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.591 15:53:18 -- target/invalid.sh@25 -- # printf %x 72 00:10:38.591 15:53:18 -- 
target/invalid.sh@25 -- # echo -e '\x48' 00:10:38.591 15:53:18 -- target/invalid.sh@25 -- # string+=H 00:10:38.591 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.591 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.591 15:53:18 -- target/invalid.sh@25 -- # printf %x 71 00:10:38.591 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:38.591 15:53:18 -- target/invalid.sh@25 -- # string+=G 00:10:38.591 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.591 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.591 15:53:18 -- target/invalid.sh@25 -- # printf %x 62 00:10:38.591 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:38.591 15:53:18 -- target/invalid.sh@25 -- # string+='>' 00:10:38.591 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.591 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.591 15:53:18 -- target/invalid.sh@25 -- # printf %x 100 00:10:38.591 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:38.591 15:53:18 -- target/invalid.sh@25 -- # string+=d 00:10:38.591 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.591 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.591 15:53:18 -- target/invalid.sh@25 -- # printf %x 93 00:10:38.591 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:38.591 15:53:18 -- target/invalid.sh@25 -- # string+=']' 00:10:38.591 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.591 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.591 15:53:18 -- target/invalid.sh@25 -- # printf %x 42 00:10:38.591 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:10:38.591 15:53:18 -- target/invalid.sh@25 -- # string+='*' 00:10:38.591 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.591 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # printf %x 122 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # string+=z 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.851 15:53:18 -- target/invalid.sh@28 -- # [[ - == \- ]] 00:10:38.851 15:53:18 -- target/invalid.sh@29 -- # string='\-nZ,>'\''t[gg2zb3HG>d]*z' 00:10:38.851 15:53:18 -- target/invalid.sh@31 -- # echo '\-nZ,>'\''t[gg2zb3HG>d]*z' 00:10:38.851 15:53:18 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '\-nZ,>'\''t[gg2zb3HG>d]*z' nqn.2016-06.io.spdk:cnode5284 00:10:38.851 [2024-04-26 15:53:18.433602] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5284: invalid serial number '\-nZ,>'t[gg2zb3HG>d]*z' 00:10:38.851 15:53:18 -- target/invalid.sh@54 -- # out='request: 00:10:38.851 { 00:10:38.851 "nqn": "nqn.2016-06.io.spdk:cnode5284", 00:10:38.851 "serial_number": "\\-nZ,>'\''t[gg2zb3HG>d]*z", 00:10:38.851 "method": "nvmf_create_subsystem", 00:10:38.851 "req_id": 1 00:10:38.851 } 00:10:38.851 Got JSON-RPC error response 00:10:38.851 response: 00:10:38.851 { 00:10:38.851 "code": -32602, 00:10:38.851 "message": "Invalid SN \\-nZ,>'\''t[gg2zb3HG>d]*z" 00:10:38.851 }' 00:10:38.851 15:53:18 -- target/invalid.sh@55 -- # [[ request: 00:10:38.851 { 00:10:38.851 "nqn": "nqn.2016-06.io.spdk:cnode5284", 00:10:38.851 "serial_number": "\\-nZ,>'t[gg2zb3HG>d]*z", 00:10:38.851 "method": "nvmf_create_subsystem", 00:10:38.851 "req_id": 1 00:10:38.851 } 00:10:38.851 Got 
JSON-RPC error response 00:10:38.851 response: 00:10:38.851 { 00:10:38.851 "code": -32602, 00:10:38.851 "message": "Invalid SN \\-nZ,>'t[gg2zb3HG>d]*z" 00:10:38.851 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:38.851 15:53:18 -- target/invalid.sh@58 -- # gen_random_s 41 00:10:38.851 15:53:18 -- target/invalid.sh@19 -- # local length=41 ll 00:10:38.851 15:53:18 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:38.851 15:53:18 -- target/invalid.sh@21 -- # local chars 00:10:38.851 15:53:18 -- target/invalid.sh@22 -- # local string 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # printf %x 81 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x51' 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # string+=Q 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # printf %x 69 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x45' 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # string+=E 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # printf %x 38 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x26' 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # string+='&' 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # printf %x 86 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # string+=V 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # printf %x 85 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x55' 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # string+=U 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # printf %x 65 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x41' 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # string+=A 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # printf %x 63 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # string+='?' 
00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # printf %x 72 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # string+=H 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # printf %x 125 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # string+='}' 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # printf %x 74 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # string+=J 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # printf %x 72 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # string+=H 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # printf %x 114 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:38.851 15:53:18 -- target/invalid.sh@25 -- # string+=r 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.851 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.111 15:53:18 -- target/invalid.sh@25 -- # printf %x 60 00:10:39.111 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:10:39.111 15:53:18 -- target/invalid.sh@25 -- # string+='<' 00:10:39.111 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.111 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.111 15:53:18 -- target/invalid.sh@25 -- # printf %x 46 00:10:39.111 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:10:39.111 15:53:18 -- target/invalid.sh@25 -- # string+=. 
00:10:39.111 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.111 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.111 15:53:18 -- target/invalid.sh@25 -- # printf %x 109 00:10:39.111 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:39.111 15:53:18 -- target/invalid.sh@25 -- # string+=m 00:10:39.111 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.111 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.111 15:53:18 -- target/invalid.sh@25 -- # printf %x 94 00:10:39.111 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:39.111 15:53:18 -- target/invalid.sh@25 -- # string+='^' 00:10:39.111 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.111 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.111 15:53:18 -- target/invalid.sh@25 -- # printf %x 107 00:10:39.111 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:39.111 15:53:18 -- target/invalid.sh@25 -- # string+=k 00:10:39.111 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.111 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.111 15:53:18 -- target/invalid.sh@25 -- # printf %x 49 00:10:39.111 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x31' 00:10:39.111 15:53:18 -- target/invalid.sh@25 -- # string+=1 00:10:39.111 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 44 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # string+=, 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 72 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # string+=H 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 62 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # string+='>' 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 121 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x79' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # string+=y 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 93 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # string+=']' 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 62 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # string+='>' 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 87 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # string+=W 
00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 73 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x49' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # string+=I 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 36 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # string+='$' 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 107 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # string+=k 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 94 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # string+='^' 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 119 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x77' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # string+=w 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 50 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # string+=2 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 45 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # string+=- 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 127 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # string+=$'\177' 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 68 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x44' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # string+=D 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 127 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # string+=$'\177' 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 34 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # 
string+='"' 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 52 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x34' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # string+=4 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 84 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x54' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # string+=T 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 76 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # string+=L 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 47 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # string+=/ 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # printf %x 97 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # echo -e '\x61' 00:10:39.112 15:53:18 -- target/invalid.sh@25 -- # string+=a 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:39.112 15:53:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:39.112 15:53:18 -- target/invalid.sh@28 -- # [[ Q == \- ]] 00:10:39.112 15:53:18 -- target/invalid.sh@31 -- # echo 'QE&VUA?H}JHr<.m^k1,H>y]>WI$k^w2-D"4TL/a' 00:10:39.112 15:53:18 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'QE&VUA?H}JHr<.m^k1,H>y]>WI$k^w2-D"4TL/a' nqn.2016-06.io.spdk:cnode19866 00:10:39.372 [2024-04-26 15:53:18.859067] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19866: invalid model number 'QE&VUA?H}JHr<.m^k1,H>y]>WI$k^w2-D"4TL/a' 00:10:39.372 15:53:18 -- target/invalid.sh@58 -- # out='request: 00:10:39.372 { 00:10:39.372 "nqn": "nqn.2016-06.io.spdk:cnode19866", 00:10:39.372 "model_number": "QE&VUA?H}JHr<.m^k1,H>y]>WI$k^w2-\u007fD\u007f\"4TL/a", 00:10:39.372 "method": "nvmf_create_subsystem", 00:10:39.372 "req_id": 1 00:10:39.372 } 00:10:39.372 Got JSON-RPC error response 00:10:39.372 response: 00:10:39.372 { 00:10:39.372 "code": -32602, 00:10:39.372 "message": "Invalid MN QE&VUA?H}JHr<.m^k1,H>y]>WI$k^w2-\u007fD\u007f\"4TL/a" 00:10:39.372 }' 00:10:39.372 15:53:18 -- target/invalid.sh@59 -- # [[ request: 00:10:39.372 { 00:10:39.372 "nqn": "nqn.2016-06.io.spdk:cnode19866", 00:10:39.372 "model_number": "QE&VUA?H}JHr<.m^k1,H>y]>WI$k^w2-\u007fD\u007f\"4TL/a", 00:10:39.372 "method": "nvmf_create_subsystem", 00:10:39.372 "req_id": 1 00:10:39.372 } 00:10:39.372 Got JSON-RPC error response 00:10:39.372 response: 00:10:39.372 { 00:10:39.372 "code": -32602, 00:10:39.372 "message": "Invalid MN QE&VUA?H}JHr<.m^k1,H>y]>WI$k^w2-\u007fD\u007f\"4TL/a" 00:10:39.372 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:39.372 15:53:18 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 
00:10:39.372 [2024-04-26 15:53:19.047792] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.631 15:53:19 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:39.631 15:53:19 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:10:39.631 15:53:19 -- target/invalid.sh@67 -- # echo '' 00:10:39.631 15:53:19 -- target/invalid.sh@67 -- # head -n 1 00:10:39.631 15:53:19 -- target/invalid.sh@67 -- # IP= 00:10:39.631 15:53:19 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:10:39.891 [2024-04-26 15:53:19.421055] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:39.891 15:53:19 -- target/invalid.sh@69 -- # out='request: 00:10:39.891 { 00:10:39.891 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:39.891 "listen_address": { 00:10:39.891 "trtype": "tcp", 00:10:39.891 "traddr": "", 00:10:39.891 "trsvcid": "4421" 00:10:39.891 }, 00:10:39.891 "method": "nvmf_subsystem_remove_listener", 00:10:39.891 "req_id": 1 00:10:39.891 } 00:10:39.891 Got JSON-RPC error response 00:10:39.891 response: 00:10:39.891 { 00:10:39.891 "code": -32602, 00:10:39.891 "message": "Invalid parameters" 00:10:39.891 }' 00:10:39.891 15:53:19 -- target/invalid.sh@70 -- # [[ request: 00:10:39.891 { 00:10:39.891 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:39.891 "listen_address": { 00:10:39.891 "trtype": "tcp", 00:10:39.891 "traddr": "", 00:10:39.891 "trsvcid": "4421" 00:10:39.891 }, 00:10:39.891 "method": "nvmf_subsystem_remove_listener", 00:10:39.891 "req_id": 1 00:10:39.891 } 00:10:39.891 Got JSON-RPC error response 00:10:39.891 response: 00:10:39.891 { 00:10:39.891 "code": -32602, 00:10:39.891 "message": "Invalid parameters" 00:10:39.891 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:39.891 15:53:19 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27761 -i 0 00:10:40.151 [2024-04-26 15:53:19.601605] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27761: invalid cntlid range [0-65519] 00:10:40.151 15:53:19 -- target/invalid.sh@73 -- # out='request: 00:10:40.151 { 00:10:40.151 "nqn": "nqn.2016-06.io.spdk:cnode27761", 00:10:40.151 "min_cntlid": 0, 00:10:40.151 "method": "nvmf_create_subsystem", 00:10:40.151 "req_id": 1 00:10:40.151 } 00:10:40.151 Got JSON-RPC error response 00:10:40.151 response: 00:10:40.151 { 00:10:40.151 "code": -32602, 00:10:40.151 "message": "Invalid cntlid range [0-65519]" 00:10:40.151 }' 00:10:40.151 15:53:19 -- target/invalid.sh@74 -- # [[ request: 00:10:40.151 { 00:10:40.151 "nqn": "nqn.2016-06.io.spdk:cnode27761", 00:10:40.151 "min_cntlid": 0, 00:10:40.151 "method": "nvmf_create_subsystem", 00:10:40.151 "req_id": 1 00:10:40.151 } 00:10:40.151 Got JSON-RPC error response 00:10:40.151 response: 00:10:40.151 { 00:10:40.151 "code": -32602, 00:10:40.151 "message": "Invalid cntlid range [0-65519]" 00:10:40.151 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:40.151 15:53:19 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8543 -i 65520 00:10:40.151 [2024-04-26 15:53:19.782256] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8543: invalid cntlid range 
[65520-65519] 00:10:40.151 15:53:19 -- target/invalid.sh@75 -- # out='request: 00:10:40.151 { 00:10:40.151 "nqn": "nqn.2016-06.io.spdk:cnode8543", 00:10:40.151 "min_cntlid": 65520, 00:10:40.151 "method": "nvmf_create_subsystem", 00:10:40.151 "req_id": 1 00:10:40.151 } 00:10:40.151 Got JSON-RPC error response 00:10:40.151 response: 00:10:40.151 { 00:10:40.151 "code": -32602, 00:10:40.151 "message": "Invalid cntlid range [65520-65519]" 00:10:40.151 }' 00:10:40.151 15:53:19 -- target/invalid.sh@76 -- # [[ request: 00:10:40.151 { 00:10:40.151 "nqn": "nqn.2016-06.io.spdk:cnode8543", 00:10:40.151 "min_cntlid": 65520, 00:10:40.151 "method": "nvmf_create_subsystem", 00:10:40.151 "req_id": 1 00:10:40.151 } 00:10:40.151 Got JSON-RPC error response 00:10:40.151 response: 00:10:40.151 { 00:10:40.151 "code": -32602, 00:10:40.151 "message": "Invalid cntlid range [65520-65519]" 00:10:40.151 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:40.151 15:53:19 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7591 -I 0 00:10:40.411 [2024-04-26 15:53:19.966904] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7591: invalid cntlid range [1-0] 00:10:40.411 15:53:19 -- target/invalid.sh@77 -- # out='request: 00:10:40.411 { 00:10:40.411 "nqn": "nqn.2016-06.io.spdk:cnode7591", 00:10:40.411 "max_cntlid": 0, 00:10:40.411 "method": "nvmf_create_subsystem", 00:10:40.411 "req_id": 1 00:10:40.411 } 00:10:40.411 Got JSON-RPC error response 00:10:40.411 response: 00:10:40.411 { 00:10:40.411 "code": -32602, 00:10:40.411 "message": "Invalid cntlid range [1-0]" 00:10:40.411 }' 00:10:40.411 15:53:19 -- target/invalid.sh@78 -- # [[ request: 00:10:40.411 { 00:10:40.411 "nqn": "nqn.2016-06.io.spdk:cnode7591", 00:10:40.411 "max_cntlid": 0, 00:10:40.411 "method": "nvmf_create_subsystem", 00:10:40.411 "req_id": 1 00:10:40.411 } 00:10:40.411 Got JSON-RPC error response 00:10:40.411 response: 00:10:40.411 { 00:10:40.411 "code": -32602, 00:10:40.411 "message": "Invalid cntlid range [1-0]" 00:10:40.411 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:40.411 15:53:19 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15002 -I 65520 00:10:40.671 [2024-04-26 15:53:20.163637] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15002: invalid cntlid range [1-65520] 00:10:40.671 15:53:20 -- target/invalid.sh@79 -- # out='request: 00:10:40.671 { 00:10:40.671 "nqn": "nqn.2016-06.io.spdk:cnode15002", 00:10:40.671 "max_cntlid": 65520, 00:10:40.671 "method": "nvmf_create_subsystem", 00:10:40.671 "req_id": 1 00:10:40.671 } 00:10:40.671 Got JSON-RPC error response 00:10:40.671 response: 00:10:40.671 { 00:10:40.671 "code": -32602, 00:10:40.671 "message": "Invalid cntlid range [1-65520]" 00:10:40.671 }' 00:10:40.671 15:53:20 -- target/invalid.sh@80 -- # [[ request: 00:10:40.671 { 00:10:40.671 "nqn": "nqn.2016-06.io.spdk:cnode15002", 00:10:40.671 "max_cntlid": 65520, 00:10:40.671 "method": "nvmf_create_subsystem", 00:10:40.671 "req_id": 1 00:10:40.671 } 00:10:40.671 Got JSON-RPC error response 00:10:40.671 response: 00:10:40.671 { 00:10:40.671 "code": -32602, 00:10:40.671 "message": "Invalid cntlid range [1-65520]" 00:10:40.671 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:40.671 15:53:20 -- target/invalid.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12834 -i 6 -I 5 00:10:40.931 [2024-04-26 15:53:20.360318] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12834: invalid cntlid range [6-5] 00:10:40.931 15:53:20 -- target/invalid.sh@83 -- # out='request: 00:10:40.931 { 00:10:40.931 "nqn": "nqn.2016-06.io.spdk:cnode12834", 00:10:40.931 "min_cntlid": 6, 00:10:40.931 "max_cntlid": 5, 00:10:40.931 "method": "nvmf_create_subsystem", 00:10:40.931 "req_id": 1 00:10:40.931 } 00:10:40.931 Got JSON-RPC error response 00:10:40.931 response: 00:10:40.931 { 00:10:40.931 "code": -32602, 00:10:40.931 "message": "Invalid cntlid range [6-5]" 00:10:40.931 }' 00:10:40.931 15:53:20 -- target/invalid.sh@84 -- # [[ request: 00:10:40.931 { 00:10:40.931 "nqn": "nqn.2016-06.io.spdk:cnode12834", 00:10:40.931 "min_cntlid": 6, 00:10:40.931 "max_cntlid": 5, 00:10:40.931 "method": "nvmf_create_subsystem", 00:10:40.931 "req_id": 1 00:10:40.931 } 00:10:40.931 Got JSON-RPC error response 00:10:40.931 response: 00:10:40.931 { 00:10:40.931 "code": -32602, 00:10:40.931 "message": "Invalid cntlid range [6-5]" 00:10:40.931 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:40.931 15:53:20 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:40.931 15:53:20 -- target/invalid.sh@87 -- # out='request: 00:10:40.931 { 00:10:40.931 "name": "foobar", 00:10:40.931 "method": "nvmf_delete_target", 00:10:40.931 "req_id": 1 00:10:40.931 } 00:10:40.931 Got JSON-RPC error response 00:10:40.931 response: 00:10:40.931 { 00:10:40.931 "code": -32602, 00:10:40.931 "message": "The specified target doesn'\''t exist, cannot delete it." 00:10:40.931 }' 00:10:40.931 15:53:20 -- target/invalid.sh@88 -- # [[ request: 00:10:40.931 { 00:10:40.931 "name": "foobar", 00:10:40.931 "method": "nvmf_delete_target", 00:10:40.931 "req_id": 1 00:10:40.931 } 00:10:40.931 Got JSON-RPC error response 00:10:40.931 response: 00:10:40.931 { 00:10:40.931 "code": -32602, 00:10:40.931 "message": "The specified target doesn't exist, cannot delete it." 
00:10:40.931 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:40.931 15:53:20 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:40.931 15:53:20 -- target/invalid.sh@91 -- # nvmftestfini 00:10:40.932 15:53:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:40.932 15:53:20 -- nvmf/common.sh@117 -- # sync 00:10:40.932 15:53:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:40.932 15:53:20 -- nvmf/common.sh@120 -- # set +e 00:10:40.932 15:53:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:40.932 15:53:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:40.932 rmmod nvme_tcp 00:10:40.932 rmmod nvme_fabrics 00:10:40.932 rmmod nvme_keyring 00:10:40.932 15:53:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:40.932 15:53:20 -- nvmf/common.sh@124 -- # set -e 00:10:40.932 15:53:20 -- nvmf/common.sh@125 -- # return 0 00:10:40.932 15:53:20 -- nvmf/common.sh@478 -- # '[' -n 2342062 ']' 00:10:40.932 15:53:20 -- nvmf/common.sh@479 -- # killprocess 2342062 00:10:40.932 15:53:20 -- common/autotest_common.sh@936 -- # '[' -z 2342062 ']' 00:10:40.932 15:53:20 -- common/autotest_common.sh@940 -- # kill -0 2342062 00:10:40.932 15:53:20 -- common/autotest_common.sh@941 -- # uname 00:10:40.932 15:53:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:40.932 15:53:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2342062 00:10:40.932 15:53:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:40.932 15:53:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:40.932 15:53:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2342062' 00:10:40.932 killing process with pid 2342062 00:10:40.932 15:53:20 -- common/autotest_common.sh@955 -- # kill 2342062 00:10:40.932 15:53:20 -- common/autotest_common.sh@960 -- # wait 2342062 00:10:42.314 15:53:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:42.314 15:53:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:42.314 15:53:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:42.314 15:53:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:42.314 15:53:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:42.314 15:53:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.314 15:53:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:42.314 15:53:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.854 15:53:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:44.854 00:10:44.854 real 0m12.923s 00:10:44.854 user 0m21.794s 00:10:44.854 sys 0m5.277s 00:10:44.854 15:53:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:44.854 15:53:23 -- common/autotest_common.sh@10 -- # set +x 00:10:44.854 ************************************ 00:10:44.854 END TEST nvmf_invalid 00:10:44.854 ************************************ 00:10:44.854 15:53:23 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:44.854 15:53:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:44.854 15:53:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:44.854 15:53:23 -- common/autotest_common.sh@10 -- # set +x 00:10:44.854 ************************************ 00:10:44.854 START TEST nvmf_abort 00:10:44.854 ************************************ 00:10:44.854 15:53:24 -- 
common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:44.854 * Looking for test storage... 00:10:44.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.854 15:53:24 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.854 15:53:24 -- nvmf/common.sh@7 -- # uname -s 00:10:44.854 15:53:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.854 15:53:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.854 15:53:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.854 15:53:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.854 15:53:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.854 15:53:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.854 15:53:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.854 15:53:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.854 15:53:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.854 15:53:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.854 15:53:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:44.854 15:53:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:44.854 15:53:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.854 15:53:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.854 15:53:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.854 15:53:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.854 15:53:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.854 15:53:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.854 15:53:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.854 15:53:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.854 15:53:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.854 15:53:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.854 15:53:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.854 15:53:24 -- paths/export.sh@5 -- # export PATH 00:10:44.854 15:53:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.854 15:53:24 -- nvmf/common.sh@47 -- # : 0 00:10:44.854 15:53:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:44.854 15:53:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:44.854 15:53:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.854 15:53:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.854 15:53:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.854 15:53:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:44.854 15:53:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:44.854 15:53:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:44.854 15:53:24 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:44.854 15:53:24 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:44.854 15:53:24 -- target/abort.sh@14 -- # nvmftestinit 00:10:44.854 15:53:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:44.854 15:53:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.854 15:53:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:44.854 15:53:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:44.854 15:53:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:44.854 15:53:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.854 15:53:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:44.854 15:53:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.854 15:53:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:44.854 15:53:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:44.854 15:53:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:44.854 15:53:24 -- common/autotest_common.sh@10 -- # set +x 00:10:50.140 15:53:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:50.140 15:53:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:50.140 15:53:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:50.140 15:53:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:50.140 15:53:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:50.140 15:53:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:50.140 15:53:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:50.140 15:53:29 -- nvmf/common.sh@295 -- # net_devs=() 00:10:50.140 15:53:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:50.140 15:53:29 -- nvmf/common.sh@296 -- 
# e810=() 00:10:50.140 15:53:29 -- nvmf/common.sh@296 -- # local -ga e810 00:10:50.140 15:53:29 -- nvmf/common.sh@297 -- # x722=() 00:10:50.140 15:53:29 -- nvmf/common.sh@297 -- # local -ga x722 00:10:50.140 15:53:29 -- nvmf/common.sh@298 -- # mlx=() 00:10:50.140 15:53:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:50.140 15:53:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.140 15:53:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.140 15:53:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.140 15:53:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.140 15:53:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.140 15:53:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.140 15:53:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.140 15:53:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.140 15:53:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.140 15:53:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.140 15:53:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.140 15:53:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:50.140 15:53:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:50.140 15:53:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:50.140 15:53:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:50.140 15:53:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:50.140 15:53:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:50.140 15:53:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:50.140 15:53:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:50.140 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:50.140 15:53:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:50.140 15:53:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:50.140 15:53:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.140 15:53:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.140 15:53:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:50.140 15:53:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:50.140 15:53:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:50.140 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:50.140 15:53:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:50.140 15:53:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:50.140 15:53:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.140 15:53:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.140 15:53:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:50.140 15:53:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:50.140 15:53:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:50.140 15:53:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:50.140 15:53:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:50.140 15:53:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.140 15:53:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:50.140 15:53:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.140 15:53:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:50.140 Found 
net devices under 0000:86:00.0: cvl_0_0 00:10:50.140 15:53:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.140 15:53:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:50.140 15:53:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.140 15:53:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:50.140 15:53:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.140 15:53:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:50.140 Found net devices under 0000:86:00.1: cvl_0_1 00:10:50.140 15:53:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.140 15:53:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:50.140 15:53:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:50.140 15:53:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:50.140 15:53:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:50.140 15:53:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:50.140 15:53:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.140 15:53:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.140 15:53:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.140 15:53:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:50.140 15:53:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.140 15:53:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.140 15:53:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:50.140 15:53:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.140 15:53:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.140 15:53:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:50.140 15:53:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:50.140 15:53:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.140 15:53:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.140 15:53:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.140 15:53:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.140 15:53:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:50.140 15:53:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.140 15:53:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.140 15:53:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.140 15:53:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:50.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:10:50.140 00:10:50.140 --- 10.0.0.2 ping statistics --- 00:10:50.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.140 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:10:50.140 15:53:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:50.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.387 ms 00:10:50.140 00:10:50.140 --- 10.0.0.1 ping statistics --- 00:10:50.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.140 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:10:50.140 15:53:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.140 15:53:29 -- nvmf/common.sh@411 -- # return 0 00:10:50.140 15:53:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:50.140 15:53:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.140 15:53:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:50.140 15:53:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:50.140 15:53:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.140 15:53:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:50.140 15:53:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:50.140 15:53:29 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:50.140 15:53:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:50.140 15:53:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:50.140 15:53:29 -- common/autotest_common.sh@10 -- # set +x 00:10:50.140 15:53:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:50.140 15:53:29 -- nvmf/common.sh@470 -- # nvmfpid=2346492 00:10:50.141 15:53:29 -- nvmf/common.sh@471 -- # waitforlisten 2346492 00:10:50.141 15:53:29 -- common/autotest_common.sh@817 -- # '[' -z 2346492 ']' 00:10:50.141 15:53:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.141 15:53:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:50.141 15:53:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.141 15:53:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:50.141 15:53:29 -- common/autotest_common.sh@10 -- # set +x 00:10:50.141 [2024-04-26 15:53:29.757390] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:10:50.141 [2024-04-26 15:53:29.757474] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.141 EAL: No free 2048 kB hugepages reported on node 1 00:10:50.400 [2024-04-26 15:53:29.866828] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:50.658 [2024-04-26 15:53:30.093367] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.658 [2024-04-26 15:53:30.093412] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.658 [2024-04-26 15:53:30.093425] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.658 [2024-04-26 15:53:30.093438] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.658 [2024-04-26 15:53:30.093452] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
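The abort test target started above runs on top of the network-namespace split that nvmf_tcp_init performed earlier in the trace: one ice port (cvl_0_0, 10.0.0.2/24) is moved into the cvl_0_0_ns_spdk namespace as the target side, the other (cvl_0_1, 10.0.0.1/24) stays in the root namespace as the initiator, TCP port 4420 is opened, and a ping in each direction verifies the path. A condensed sketch of that plumbing, with the names and addresses exactly as they appear in the trace (run as root):

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Because nvmf_tgt was launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE), the rpc_cmd calls and the 10.0.0.2:4420 listener that follow are served from the target side while the abort example connects from the root namespace.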
00:10:50.658 [2024-04-26 15:53:30.093597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.658 [2024-04-26 15:53:30.093626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.658 [2024-04-26 15:53:30.093627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.917 15:53:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:50.917 15:53:30 -- common/autotest_common.sh@850 -- # return 0 00:10:50.917 15:53:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:50.917 15:53:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:50.917 15:53:30 -- common/autotest_common.sh@10 -- # set +x 00:10:50.917 15:53:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.917 15:53:30 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:50.917 15:53:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.917 15:53:30 -- common/autotest_common.sh@10 -- # set +x 00:10:50.917 [2024-04-26 15:53:30.582938] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.917 15:53:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.917 15:53:30 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:51.176 15:53:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:51.176 15:53:30 -- common/autotest_common.sh@10 -- # set +x 00:10:51.176 Malloc0 00:10:51.176 15:53:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:51.176 15:53:30 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:51.176 15:53:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:51.176 15:53:30 -- common/autotest_common.sh@10 -- # set +x 00:10:51.176 Delay0 00:10:51.176 15:53:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:51.176 15:53:30 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:51.176 15:53:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:51.176 15:53:30 -- common/autotest_common.sh@10 -- # set +x 00:10:51.176 15:53:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:51.176 15:53:30 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:51.176 15:53:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:51.176 15:53:30 -- common/autotest_common.sh@10 -- # set +x 00:10:51.176 15:53:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:51.176 15:53:30 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:51.176 15:53:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:51.176 15:53:30 -- common/autotest_common.sh@10 -- # set +x 00:10:51.176 [2024-04-26 15:53:30.727243] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.176 15:53:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:51.176 15:53:30 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:51.176 15:53:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:51.176 15:53:30 -- common/autotest_common.sh@10 -- # set +x 00:10:51.176 15:53:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:51.176 15:53:30 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:51.176 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.176 [2024-04-26 15:53:30.836068] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:53.836 Initializing NVMe Controllers 00:10:53.836 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:53.836 controller IO queue size 128 less than required 00:10:53.836 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:53.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:53.836 Initialization complete. Launching workers. 00:10:53.836 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 125, failed: 38139 00:10:53.836 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38198, failed to submit 66 00:10:53.836 success 38139, unsuccess 59, failed 0 00:10:53.836 15:53:32 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:53.836 15:53:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.836 15:53:32 -- common/autotest_common.sh@10 -- # set +x 00:10:53.836 15:53:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.836 15:53:32 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:53.836 15:53:32 -- target/abort.sh@38 -- # nvmftestfini 00:10:53.836 15:53:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:53.836 15:53:32 -- nvmf/common.sh@117 -- # sync 00:10:53.836 15:53:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:53.836 15:53:32 -- nvmf/common.sh@120 -- # set +e 00:10:53.836 15:53:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:53.836 15:53:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:53.836 rmmod nvme_tcp 00:10:53.836 rmmod nvme_fabrics 00:10:53.836 rmmod nvme_keyring 00:10:53.836 15:53:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:53.836 15:53:33 -- nvmf/common.sh@124 -- # set -e 00:10:53.836 15:53:33 -- nvmf/common.sh@125 -- # return 0 00:10:53.836 15:53:33 -- nvmf/common.sh@478 -- # '[' -n 2346492 ']' 00:10:53.836 15:53:33 -- nvmf/common.sh@479 -- # killprocess 2346492 00:10:53.836 15:53:33 -- common/autotest_common.sh@936 -- # '[' -z 2346492 ']' 00:10:53.836 15:53:33 -- common/autotest_common.sh@940 -- # kill -0 2346492 00:10:53.836 15:53:33 -- common/autotest_common.sh@941 -- # uname 00:10:53.836 15:53:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:53.836 15:53:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2346492 00:10:53.836 15:53:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:53.836 15:53:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:53.836 15:53:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2346492' 00:10:53.836 killing process with pid 2346492 00:10:53.836 15:53:33 -- common/autotest_common.sh@955 -- # kill 2346492 00:10:53.836 15:53:33 -- common/autotest_common.sh@960 -- # wait 2346492 00:10:54.810 15:53:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:54.810 15:53:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:54.810 15:53:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:54.810 15:53:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:54.810 15:53:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:54.810 
15:53:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.069 15:53:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:55.069 15:53:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.977 15:53:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:56.977 00:10:56.977 real 0m12.449s 00:10:56.977 user 0m15.504s 00:10:56.977 sys 0m5.212s 00:10:56.977 15:53:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:56.977 15:53:36 -- common/autotest_common.sh@10 -- # set +x 00:10:56.977 ************************************ 00:10:56.977 END TEST nvmf_abort 00:10:56.977 ************************************ 00:10:56.977 15:53:36 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:56.977 15:53:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:56.977 15:53:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:56.977 15:53:36 -- common/autotest_common.sh@10 -- # set +x 00:10:57.236 ************************************ 00:10:57.236 START TEST nvmf_ns_hotplug_stress 00:10:57.236 ************************************ 00:10:57.236 15:53:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:57.236 * Looking for test storage... 00:10:57.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.236 15:53:36 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.236 15:53:36 -- nvmf/common.sh@7 -- # uname -s 00:10:57.236 15:53:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.236 15:53:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.236 15:53:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.236 15:53:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.236 15:53:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.236 15:53:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.236 15:53:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.236 15:53:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.236 15:53:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.236 15:53:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.236 15:53:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:57.236 15:53:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:57.236 15:53:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.236 15:53:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.236 15:53:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.236 15:53:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.236 15:53:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.236 15:53:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.236 15:53:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.236 15:53:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.236 15:53:36 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.236 15:53:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.236 15:53:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.236 15:53:36 -- paths/export.sh@5 -- # export PATH 00:10:57.236 15:53:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.236 15:53:36 -- nvmf/common.sh@47 -- # : 0 00:10:57.236 15:53:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:57.236 15:53:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:57.236 15:53:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.236 15:53:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.236 15:53:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.236 15:53:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:57.236 15:53:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:57.236 15:53:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:57.236 15:53:36 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:57.236 15:53:36 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:10:57.236 15:53:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:57.236 15:53:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.236 15:53:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:57.236 15:53:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:57.236 15:53:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:57.236 15:53:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:57.236 15:53:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:57.236 15:53:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.236 15:53:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:57.236 15:53:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:57.236 15:53:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:57.236 15:53:36 -- common/autotest_common.sh@10 -- # set +x 00:11:02.525 15:53:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:02.525 15:53:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:02.525 15:53:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:02.525 15:53:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:02.525 15:53:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:02.525 15:53:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:02.525 15:53:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:02.525 15:53:42 -- nvmf/common.sh@295 -- # net_devs=() 00:11:02.525 15:53:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:02.525 15:53:42 -- nvmf/common.sh@296 -- # e810=() 00:11:02.525 15:53:42 -- nvmf/common.sh@296 -- # local -ga e810 00:11:02.525 15:53:42 -- nvmf/common.sh@297 -- # x722=() 00:11:02.525 15:53:42 -- nvmf/common.sh@297 -- # local -ga x722 00:11:02.525 15:53:42 -- nvmf/common.sh@298 -- # mlx=() 00:11:02.525 15:53:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:02.525 15:53:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.525 15:53:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.525 15:53:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.525 15:53:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.525 15:53:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.525 15:53:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.525 15:53:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.525 15:53:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.525 15:53:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.525 15:53:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.525 15:53:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.525 15:53:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:02.525 15:53:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:02.525 15:53:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:02.525 15:53:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:02.525 15:53:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:02.525 15:53:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:02.525 15:53:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:02.525 15:53:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:02.525 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:02.525 15:53:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:02.525 15:53:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:02.525 15:53:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.525 15:53:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.525 15:53:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:02.526 15:53:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:02.526 15:53:42 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:02.526 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:02.526 15:53:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:02.526 15:53:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:02.526 15:53:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.526 15:53:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.526 15:53:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:02.526 15:53:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:02.526 15:53:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:02.526 15:53:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:02.526 15:53:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:02.526 15:53:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.526 15:53:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:02.526 15:53:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.526 15:53:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:02.526 Found net devices under 0000:86:00.0: cvl_0_0 00:11:02.526 15:53:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.526 15:53:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:02.526 15:53:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.526 15:53:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:02.526 15:53:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.526 15:53:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:02.526 Found net devices under 0000:86:00.1: cvl_0_1 00:11:02.526 15:53:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.526 15:53:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:02.526 15:53:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:02.526 15:53:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:02.526 15:53:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:02.526 15:53:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:02.526 15:53:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.526 15:53:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.526 15:53:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:02.526 15:53:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:02.526 15:53:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:02.526 15:53:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:02.526 15:53:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:02.526 15:53:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:02.526 15:53:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.526 15:53:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:02.526 15:53:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:02.526 15:53:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:02.526 15:53:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:02.526 15:53:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:02.526 15:53:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.785 15:53:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:02.785 15:53:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
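Note: the nvmf/common.sh@289–@390 trace above is gather_supported_nvmf_pci_devs. It builds lists of supported NIC PCI IDs, keeps only the E810 entries because SPDK_TEST_NVMF_NICS=e810, and resolves each PCI function to its kernel net device through sysfs, which is where the two "Found ... cvl_0_*" messages come from. A minimal standalone sketch of that resolution step follows; it substitutes a plain lspci scan for the script's pci_bus_cache helper, so treat it as an illustration rather than the function itself.

# Sketch only: list Intel E810 ports (device ID 0x159b) and the net devices
# behind them, the way the "Found ..." lines in the trace are produced.
intel=0x8086
e810_dev=0x159b
pci_devs=()
# Assumption: a plain lspci scan instead of the script's pci_bus_cache lookup.
while read -r slot _; do
    pci_devs+=("0000:$slot")
done < <(lspci -d "${intel#0x}:${e810_dev#0x}")

net_devs=()
for pci in "${pci_devs[@]}"; do
    echo "Found $pci ($intel - $e810_dev)"
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # sysfs exposes the netdev name
    pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the path, keep e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done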
00:11:02.785 15:53:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:02.785 15:53:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:02.785 15:53:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:02.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:02.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:11:02.785 00:11:02.785 --- 10.0.0.2 ping statistics --- 00:11:02.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.785 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:11:02.785 15:53:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:02.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:02.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:11:02.785 00:11:02.785 --- 10.0.0.1 ping statistics --- 00:11:02.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.785 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:11:02.785 15:53:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.785 15:53:42 -- nvmf/common.sh@411 -- # return 0 00:11:02.785 15:53:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:02.785 15:53:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.785 15:53:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:02.785 15:53:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:02.785 15:53:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.785 15:53:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:02.785 15:53:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:02.785 15:53:42 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:11:02.785 15:53:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:02.785 15:53:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:02.785 15:53:42 -- common/autotest_common.sh@10 -- # set +x 00:11:02.785 15:53:42 -- nvmf/common.sh@470 -- # nvmfpid=2350863 00:11:02.785 15:53:42 -- nvmf/common.sh@471 -- # waitforlisten 2350863 00:11:02.785 15:53:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:02.785 15:53:42 -- common/autotest_common.sh@817 -- # '[' -z 2350863 ']' 00:11:02.785 15:53:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.785 15:53:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:02.785 15:53:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.785 15:53:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:02.785 15:53:42 -- common/autotest_common.sh@10 -- # set +x 00:11:02.785 [2024-04-26 15:53:42.462463] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
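Note: the nvmf_tcp_init trace just above is the network fixture used by every TCP test in this run: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and becomes the target side at 10.0.0.2, the peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens port 4420, and a ping in each direction confirms the link. Condensed into one place (commands copied from the trace; the shell variables are introduced here only for readability):

# Run as root; copied from the nvmf_tcp_init trace, variables added for clarity.
TGT_IF=cvl_0_0            # port handed to the SPDK target
INI_IF=cvl_0_1            # port left in the root namespace for the initiator
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                           # isolate the target port
ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target address
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP in
ping -c 1 10.0.0.2                                          # root ns -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                      # namespace -> root ns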
00:11:02.785 [2024-04-26 15:53:42.462550] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.044 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.044 [2024-04-26 15:53:42.570867] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:03.304 [2024-04-26 15:53:42.795568] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.304 [2024-04-26 15:53:42.795611] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.304 [2024-04-26 15:53:42.795625] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.304 [2024-04-26 15:53:42.795637] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.304 [2024-04-26 15:53:42.795651] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:03.304 [2024-04-26 15:53:42.795783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.304 [2024-04-26 15:53:42.795803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.304 [2024-04-26 15:53:42.795804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.562 15:53:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:03.562 15:53:43 -- common/autotest_common.sh@850 -- # return 0 00:11:03.562 15:53:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:03.562 15:53:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:03.562 15:53:43 -- common/autotest_common.sh@10 -- # set +x 00:11:03.819 15:53:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.819 15:53:43 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:11:03.819 15:53:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:03.819 [2024-04-26 15:53:43.427713] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.819 15:53:43 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:04.078 15:53:43 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.337 [2024-04-26 15:53:43.819509] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.337 15:53:43 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:04.597 15:53:44 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:04.597 Malloc0 00:11:04.597 15:53:44 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:04.855 Delay0 00:11:04.855 15:53:44 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.114 15:53:44 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:05.114 NULL1 00:11:05.114 15:53:44 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:05.374 15:53:44 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=2351245 00:11:05.374 15:53:44 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:05.374 15:53:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:05.374 15:53:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.374 EAL: No free 2048 kB hugepages reported on node 1 00:11:05.633 15:53:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.893 15:53:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:11:05.893 15:53:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:05.893 true 00:11:05.893 15:53:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:05.893 15:53:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.153 15:53:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:06.413 15:53:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:11:06.413 15:53:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:06.413 true 00:11:06.413 15:53:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:06.413 15:53:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.672 15:53:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:06.932 15:53:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:11:06.932 15:53:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:07.192 true 00:11:07.192 15:53:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:07.192 15:53:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.192 15:53:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.452 15:53:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:11:07.452 15:53:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:07.711 true 00:11:07.711 15:53:47 -- 
target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:07.711 15:53:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.711 15:53:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.971 15:53:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:11:07.971 15:53:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:08.231 true 00:11:08.231 15:53:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:08.231 15:53:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.490 15:53:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.491 15:53:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:11:08.491 15:53:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:08.751 true 00:11:08.751 15:53:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:08.751 15:53:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.010 15:53:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:09.270 15:53:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:11:09.270 15:53:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:09.270 true 00:11:09.270 15:53:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:09.270 15:53:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.530 15:53:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:09.790 15:53:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:11:09.790 15:53:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:09.790 true 00:11:10.050 15:53:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:10.050 15:53:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.050 15:53:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:10.309 15:53:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:11:10.309 15:53:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:10.568 true 00:11:10.568 15:53:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:10.568 15:53:50 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.568 15:53:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:10.828 15:53:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:11:10.828 15:53:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:11.087 true 00:11:11.087 15:53:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:11.087 15:53:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:11.347 15:53:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:11.347 15:53:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:11:11.347 15:53:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:11.606 true 00:11:11.606 15:53:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:11.606 15:53:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:11.865 15:53:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:12.124 15:53:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:11:12.124 15:53:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:12.124 true 00:11:12.124 15:53:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:12.124 15:53:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.382 15:53:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:12.642 15:53:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:11:12.642 15:53:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:12.642 true 00:11:12.902 15:53:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:12.902 15:53:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.902 15:53:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:13.163 15:53:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:11:13.163 15:53:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:13.424 true 00:11:13.424 15:53:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:13.424 15:53:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
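Note: after the subsystem was provisioned at @18–@27 (tcp transport, nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, Malloc0 wrapped in Delay0, plus the NULL1 null bdev), everything from target/ns_hotplug_stress.sh@31 onward is the stress phase proper: spdk_nvme_perf keeps random reads in flight while the script hot-removes namespace 1, hot-adds Delay0 back, and grows NULL1, repeating the same @35–@41 pattern once per iteration (null_size 1001, 1002, ...). Reconstructed from that repeating pattern only, each iteration looks roughly like the sketch below; the real loop in ns_hotplug_stress.sh may order or bound things slightly differently.

# Paraphrase of the repeating @35-@41 trace pattern, not the script verbatim.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do     # PERF_PID is the spdk_nvme_perf pid (2351245 here)
    $rpc nvmf_subsystem_remove_ns "$nqn" 1    # hot-remove nsid 1 while I/O is running
    $rpc nvmf_subsystem_add_ns "$nqn" Delay0  # hot-add a namespace again
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"  # resize the exported null bdev ("true" in the trace)
done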
00:11:13.424 15:53:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:13.683 15:53:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:11:13.683 15:53:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:11:13.943 true 00:11:13.944 15:53:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:13.944 15:53:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.204 15:53:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:14.204 15:53:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:11:14.204 15:53:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:14.463 true 00:11:14.463 15:53:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:14.463 15:53:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.723 15:53:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:14.723 15:53:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:11:14.723 15:53:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:14.982 true 00:11:14.982 15:53:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:14.982 15:53:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.242 15:53:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:15.500 15:53:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:11:15.500 15:53:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:15.500 true 00:11:15.500 15:53:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:15.500 15:53:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.759 15:53:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:16.019 15:53:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:11:16.019 15:53:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:16.278 true 00:11:16.278 15:53:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:16.278 15:53:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.278 15:53:55 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:16.538 15:53:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:11:16.538 15:53:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:16.797 true 00:11:16.797 15:53:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:16.797 15:53:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.057 15:53:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:17.057 15:53:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:11:17.057 15:53:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:17.317 true 00:11:17.317 15:53:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:17.317 15:53:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.577 15:53:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:17.837 15:53:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:11:17.837 15:53:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:17.837 true 00:11:18.097 15:53:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:18.097 15:53:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.097 15:53:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.357 15:53:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:11:18.357 15:53:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:18.616 true 00:11:18.616 15:53:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:18.616 15:53:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.877 15:53:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.877 15:53:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:11:18.877 15:53:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:19.136 true 00:11:19.137 15:53:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:19.137 15:53:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.396 15:53:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:11:19.656 15:53:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:11:19.656 15:53:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:19.656 true 00:11:19.656 15:53:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:19.656 15:53:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.916 15:53:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:20.176 15:53:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:11:20.176 15:53:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:20.176 true 00:11:20.176 15:53:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:20.176 15:53:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.436 15:54:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:20.695 15:54:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:11:20.695 15:54:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:20.955 true 00:11:20.955 15:54:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:20.955 15:54:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.214 15:54:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.215 15:54:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:11:21.215 15:54:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:21.473 true 00:11:21.473 15:54:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:21.473 15:54:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.731 15:54:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.731 15:54:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:11:21.731 15:54:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:21.990 true 00:11:21.990 15:54:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:21.990 15:54:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.248 15:54:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.507 15:54:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:11:22.507 15:54:01 -- 
target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:11:22.507 true 00:11:22.507 15:54:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:22.508 15:54:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.823 15:54:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:23.081 15:54:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1031 00:11:23.081 15:54:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:11:23.081 true 00:11:23.340 15:54:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:23.340 15:54:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:23.340 15:54:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:23.599 15:54:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1032 00:11:23.599 15:54:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:11:23.858 true 00:11:23.858 15:54:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:23.858 15:54:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.116 15:54:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:24.116 15:54:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033 00:11:24.116 15:54:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:11:24.376 true 00:11:24.376 15:54:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:24.376 15:54:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.637 15:54:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:24.896 15:54:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1034 00:11:24.896 15:54:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:11:24.896 true 00:11:24.896 15:54:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:24.896 15:54:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.155 15:54:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:25.414 15:54:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1035 00:11:25.414 15:54:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1035 00:11:25.674 true 00:11:25.674 15:54:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:25.674 15:54:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.674 15:54:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:25.932 15:54:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1036 00:11:25.932 15:54:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:11:26.191 true 00:11:26.191 15:54:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:26.191 15:54:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.451 15:54:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:26.451 15:54:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1037 00:11:26.451 15:54:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:11:26.710 true 00:11:26.710 15:54:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:26.710 15:54:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.969 15:54:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:27.228 15:54:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1038 00:11:27.228 15:54:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:11:27.228 true 00:11:27.228 15:54:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:27.228 15:54:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.488 15:54:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:27.747 15:54:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1039 00:11:27.747 15:54:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:11:28.006 true 00:11:28.006 15:54:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:28.006 15:54:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.006 15:54:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:28.266 15:54:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1040 00:11:28.266 15:54:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:11:28.525 true 00:11:28.525 15:54:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:28.525 
15:54:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.785 15:54:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:28.785 15:54:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1041 00:11:28.785 15:54:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:11:29.044 true 00:11:29.044 15:54:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:29.044 15:54:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.303 15:54:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:29.563 15:54:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1042 00:11:29.563 15:54:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:11:29.563 true 00:11:29.822 15:54:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:29.822 15:54:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.822 15:54:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:30.082 15:54:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1043 00:11:30.082 15:54:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:11:30.342 true 00:11:30.342 15:54:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:30.342 15:54:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.601 15:54:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:30.601 15:54:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1044 00:11:30.601 15:54:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:11:30.860 true 00:11:30.860 15:54:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:30.861 15:54:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.181 15:54:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:31.181 15:54:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1045 00:11:31.181 15:54:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:11:31.459 true 00:11:31.459 15:54:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:31.459 15:54:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.727 15:54:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:31.727 15:54:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1046 00:11:31.727 15:54:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:11:31.986 true 00:11:31.986 15:54:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:31.986 15:54:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.246 15:54:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:32.506 15:54:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1047 00:11:32.506 15:54:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:11:32.506 true 00:11:32.506 15:54:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:32.506 15:54:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.767 15:54:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:33.026 15:54:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1048 00:11:33.026 15:54:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:11:33.285 true 00:11:33.285 15:54:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:33.285 15:54:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.543 15:54:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:33.543 15:54:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1049 00:11:33.543 15:54:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:11:33.802 true 00:11:33.802 15:54:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:33.802 15:54:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.062 15:54:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:34.322 15:54:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1050 00:11:34.322 15:54:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:11:34.322 true 00:11:34.322 15:54:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:34.322 15:54:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.581 15:54:14 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:34.841 15:54:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1051 00:11:34.841 15:54:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:11:34.841 true 00:11:35.101 15:54:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:35.101 15:54:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.101 15:54:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.361 15:54:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1052 00:11:35.361 15:54:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:11:35.621 true 00:11:35.621 15:54:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:35.621 15:54:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.888 15:54:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.888 Initializing NVMe Controllers 00:11:35.888 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:35.888 Controller IO queue size 128, less than required. 00:11:35.888 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:35.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:35.888 Initialization complete. Launching workers. 
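Note: the "Initializing NVMe Controllers ... Launching workers" block above is the perf workload reaching the end of its 30-second run; it was started back at @31/@33, and its pid is what the kill -0 checks have been polling. For reference, the invocation exactly as it appears in the trace; the backgrounding and pid capture are shown here as an assumption, since the trace only records that PERF_PID ends up as 2351245.

# 30 s of 512-byte random reads at queue depth 128 on core 0 against the
# subsystem exported on 10.0.0.2:4420; flags copied verbatim from the @31 trace line.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!    # assumption: captured with $!; only the resulting pid is visible in the trace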
00:11:35.888 ======================================================== 00:11:35.888 Latency(us) 00:11:35.888 Device Information : IOPS MiB/s Average min max 00:11:35.888 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 22441.31 10.96 5703.84 2586.94 11533.34 00:11:35.888 ======================================================== 00:11:35.888 Total : 22441.31 10.96 5703.84 2586.94 11533.34 00:11:35.888 00:11:35.888 15:54:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1053 00:11:35.888 15:54:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:11:36.147 true 00:11:36.147 15:54:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2351245 00:11:36.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (2351245) - No such process 00:11:36.147 15:54:15 -- target/ns_hotplug_stress.sh@44 -- # wait 2351245 00:11:36.147 15:54:15 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:11:36.147 15:54:15 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:11:36.147 15:54:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:36.147 15:54:15 -- nvmf/common.sh@117 -- # sync 00:11:36.147 15:54:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:36.147 15:54:15 -- nvmf/common.sh@120 -- # set +e 00:11:36.147 15:54:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:36.147 15:54:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:36.147 rmmod nvme_tcp 00:11:36.147 rmmod nvme_fabrics 00:11:36.147 rmmod nvme_keyring 00:11:36.147 15:54:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:36.147 15:54:15 -- nvmf/common.sh@124 -- # set -e 00:11:36.147 15:54:15 -- nvmf/common.sh@125 -- # return 0 00:11:36.147 15:54:15 -- nvmf/common.sh@478 -- # '[' -n 2350863 ']' 00:11:36.147 15:54:15 -- nvmf/common.sh@479 -- # killprocess 2350863 00:11:36.147 15:54:15 -- common/autotest_common.sh@936 -- # '[' -z 2350863 ']' 00:11:36.147 15:54:15 -- common/autotest_common.sh@940 -- # kill -0 2350863 00:11:36.147 15:54:15 -- common/autotest_common.sh@941 -- # uname 00:11:36.147 15:54:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:36.147 15:54:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2350863 00:11:36.147 15:54:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:36.147 15:54:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:36.148 15:54:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2350863' 00:11:36.148 killing process with pid 2350863 00:11:36.148 15:54:15 -- common/autotest_common.sh@955 -- # kill 2350863 00:11:36.148 15:54:15 -- common/autotest_common.sh@960 -- # wait 2350863 00:11:37.529 15:54:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:37.529 15:54:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:37.529 15:54:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:37.529 15:54:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:37.529 15:54:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:37.529 15:54:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.529 15:54:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:37.529 15:54:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.069 15:54:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:40.069 00:11:40.069 real 0m42.518s 00:11:40.069 user 2m37.799s 
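Note: once perf exits, the kill -0 gate fails ("No such process"), wait reaps pid 2351245, and the nvmftestfini trace above unwinds the fixture: unload the NVMe/TCP initiator modules, stop the nvmf_tgt app (pid 2350863, reactor_1), then tear down the namespace plumbing before the TEST summary and timing lines. Condensed below, with one assumption flagged: _remove_spdk_ns runs with its output redirected, so the namespace deletion is inferred rather than shown in the trace.

# Teardown as traced above; the pids are from this particular run.
modprobe -v -r nvme-tcp           # trace shows rmmod nvme_tcp / nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics
kill 2350863 && wait 2350863      # stop the nvmf_tgt started for this test (nvmfpid)
ip netns delete cvl_0_0_ns_spdk   # assumption: what _remove_spdk_ns does under the redirect
ip -4 addr flush cvl_0_1          # leave the initiator port unconfigured again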
00:11:40.069 sys 0m12.853s 00:11:40.069 15:54:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:40.069 15:54:19 -- common/autotest_common.sh@10 -- # set +x 00:11:40.069 ************************************ 00:11:40.069 END TEST nvmf_ns_hotplug_stress 00:11:40.069 ************************************ 00:11:40.069 15:54:19 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:40.069 15:54:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:40.069 15:54:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:40.069 15:54:19 -- common/autotest_common.sh@10 -- # set +x 00:11:40.069 ************************************ 00:11:40.069 START TEST nvmf_connect_stress 00:11:40.069 ************************************ 00:11:40.069 15:54:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:40.069 * Looking for test storage... 00:11:40.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.069 15:54:19 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:40.069 15:54:19 -- nvmf/common.sh@7 -- # uname -s 00:11:40.069 15:54:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.069 15:54:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.069 15:54:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.069 15:54:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.069 15:54:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.069 15:54:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.069 15:54:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.069 15:54:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.069 15:54:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.069 15:54:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.069 15:54:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:40.069 15:54:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:40.069 15:54:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.069 15:54:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.069 15:54:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:40.069 15:54:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.069 15:54:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.069 15:54:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.069 15:54:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.069 15:54:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.069 15:54:19 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.069 15:54:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.070 15:54:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.070 15:54:19 -- paths/export.sh@5 -- # export PATH 00:11:40.070 15:54:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.070 15:54:19 -- nvmf/common.sh@47 -- # : 0 00:11:40.070 15:54:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:40.070 15:54:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:40.070 15:54:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.070 15:54:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.070 15:54:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.070 15:54:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:40.070 15:54:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:40.070 15:54:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:40.070 15:54:19 -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:40.070 15:54:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:40.070 15:54:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.070 15:54:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:40.070 15:54:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:40.070 15:54:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:40.070 15:54:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.070 15:54:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:40.070 15:54:19 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.070 15:54:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:40.070 15:54:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:40.070 15:54:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:40.070 15:54:19 -- common/autotest_common.sh@10 -- # set +x 00:11:45.351 15:54:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:45.351 15:54:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:45.351 15:54:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:45.351 15:54:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:45.351 15:54:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:45.351 15:54:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:45.351 15:54:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:45.351 15:54:24 -- nvmf/common.sh@295 -- # net_devs=() 00:11:45.351 15:54:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:45.351 15:54:24 -- nvmf/common.sh@296 -- # e810=() 00:11:45.351 15:54:24 -- nvmf/common.sh@296 -- # local -ga e810 00:11:45.351 15:54:24 -- nvmf/common.sh@297 -- # x722=() 00:11:45.351 15:54:24 -- nvmf/common.sh@297 -- # local -ga x722 00:11:45.351 15:54:24 -- nvmf/common.sh@298 -- # mlx=() 00:11:45.351 15:54:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:45.351 15:54:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:45.351 15:54:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:45.351 15:54:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:45.351 15:54:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:45.351 15:54:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:45.351 15:54:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:45.351 15:54:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:45.351 15:54:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:45.351 15:54:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:45.351 15:54:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:45.351 15:54:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:45.351 15:54:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:45.351 15:54:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:45.351 15:54:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:45.351 15:54:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:45.351 15:54:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:45.351 15:54:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:45.351 15:54:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:45.351 15:54:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:45.351 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:45.351 15:54:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:45.351 15:54:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:45.351 15:54:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.351 15:54:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.351 15:54:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:45.351 15:54:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:45.351 15:54:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:45.351 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:45.351 
15:54:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:45.351 15:54:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:45.351 15:54:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.351 15:54:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.351 15:54:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:45.351 15:54:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:45.351 15:54:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:45.351 15:54:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:45.351 15:54:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:45.351 15:54:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.351 15:54:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:45.351 15:54:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.351 15:54:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:45.351 Found net devices under 0000:86:00.0: cvl_0_0 00:11:45.351 15:54:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.351 15:54:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:45.351 15:54:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.351 15:54:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:45.351 15:54:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.351 15:54:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:45.351 Found net devices under 0000:86:00.1: cvl_0_1 00:11:45.351 15:54:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.351 15:54:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:45.351 15:54:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:45.351 15:54:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:45.351 15:54:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:45.351 15:54:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:45.351 15:54:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:45.351 15:54:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:45.351 15:54:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:45.351 15:54:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:45.351 15:54:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:45.351 15:54:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:45.351 15:54:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:45.351 15:54:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:45.351 15:54:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:45.351 15:54:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:45.351 15:54:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:45.351 15:54:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:45.351 15:54:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:45.351 15:54:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:45.351 15:54:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:45.351 15:54:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:45.351 15:54:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:45.351 15:54:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:45.351 15:54:24 -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:45.351 15:54:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:45.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:45.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:11:45.351 00:11:45.351 --- 10.0.0.2 ping statistics --- 00:11:45.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.351 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:11:45.351 15:54:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:45.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:45.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:11:45.351 00:11:45.351 --- 10.0.0.1 ping statistics --- 00:11:45.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.351 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:11:45.351 15:54:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:45.351 15:54:24 -- nvmf/common.sh@411 -- # return 0 00:11:45.351 15:54:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:45.351 15:54:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:45.351 15:54:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:45.351 15:54:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:45.351 15:54:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:45.351 15:54:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:45.351 15:54:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:45.351 15:54:24 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:45.351 15:54:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:45.351 15:54:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:45.351 15:54:24 -- common/autotest_common.sh@10 -- # set +x 00:11:45.351 15:54:24 -- nvmf/common.sh@470 -- # nvmfpid=2360649 00:11:45.351 15:54:24 -- nvmf/common.sh@471 -- # waitforlisten 2360649 00:11:45.351 15:54:24 -- common/autotest_common.sh@817 -- # '[' -z 2360649 ']' 00:11:45.351 15:54:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.351 15:54:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:45.351 15:54:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.351 15:54:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:45.351 15:54:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:45.351 15:54:24 -- common/autotest_common.sh@10 -- # set +x 00:11:45.351 [2024-04-26 15:54:24.743283] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:11:45.351 [2024-04-26 15:54:24.743371] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:45.351 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.351 [2024-04-26 15:54:24.852564] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:45.612 [2024-04-26 15:54:25.086431] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
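The nvmf_tcp_init trace above is what lets a single host play both NVMe/TCP target and initiator: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule accepting TCP port 4420 on the initiator interface and a ping in each direction confirming the path. Boiled down to the commands that matter (interface and namespace names taken from this run; root privileges assumed):

  tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk
  ip -4 addr flush "$tgt_if"; ip -4 addr flush "$ini_if"
  ip netns add "$ns"
  ip link set "$tgt_if" netns "$ns"                 # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$ini_if"             # initiator side stays in the root namespace
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
  ip link set "$ini_if" up
  ip netns exec "$ns" ip link set "$tgt_if" up
  ip netns exec "$ns" ip link set lo up
  iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # sanity checks, as in the ping output above
  ip netns exec "$ns" ping -c 1 10.0.0.1

Every later nvmf_tgt invocation is then prefixed with ip netns exec cvl_0_0_ns_spdk, so the NVMe/TCP traffic leaves one physical port and comes back in through the other.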
00:11:45.612 [2024-04-26 15:54:25.086472] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:45.612 [2024-04-26 15:54:25.086485] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:45.612 [2024-04-26 15:54:25.086499] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:45.612 [2024-04-26 15:54:25.086512] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:45.612 [2024-04-26 15:54:25.086590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:45.612 [2024-04-26 15:54:25.086645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.612 [2024-04-26 15:54:25.086650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:45.872 15:54:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:45.872 15:54:25 -- common/autotest_common.sh@850 -- # return 0 00:11:45.872 15:54:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:45.872 15:54:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:45.872 15:54:25 -- common/autotest_common.sh@10 -- # set +x 00:11:45.872 15:54:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:45.872 15:54:25 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:45.872 15:54:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:45.872 15:54:25 -- common/autotest_common.sh@10 -- # set +x 00:11:45.872 [2024-04-26 15:54:25.549579] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.132 15:54:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.132 15:54:25 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:46.132 15:54:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.132 15:54:25 -- common/autotest_common.sh@10 -- # set +x 00:11:46.132 15:54:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.132 15:54:25 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.132 15:54:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.132 15:54:25 -- common/autotest_common.sh@10 -- # set +x 00:11:46.132 [2024-04-26 15:54:25.575179] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.132 15:54:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.132 15:54:25 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:46.132 15:54:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.132 15:54:25 -- common/autotest_common.sh@10 -- # set +x 00:11:46.132 NULL1 00:11:46.132 15:54:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.132 15:54:25 -- target/connect_stress.sh@21 -- # PERF_PID=2360898 00:11:46.132 15:54:25 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:46.132 15:54:25 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:46.132 15:54:25 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:46.132 15:54:25 -- target/connect_stress.sh@27 -- # seq 1 20 00:11:46.132 15:54:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:46.132 15:54:25 -- target/connect_stress.sh@28 -- # cat 00:11:46.133 15:54:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:46.133 15:54:25 -- target/connect_stress.sh@28 -- # cat 00:11:46.133 15:54:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:46.133 15:54:25 -- target/connect_stress.sh@28 -- # cat 00:11:46.133 15:54:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:46.133 15:54:25 -- target/connect_stress.sh@28 -- # cat 00:11:46.133 15:54:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:46.133 15:54:25 -- target/connect_stress.sh@28 -- # cat 00:11:46.133 15:54:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:46.133 15:54:25 -- target/connect_stress.sh@28 -- # cat 00:11:46.133 15:54:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:46.133 15:54:25 -- target/connect_stress.sh@28 -- # cat 00:11:46.133 15:54:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:46.133 15:54:25 -- target/connect_stress.sh@28 -- # cat 00:11:46.133 15:54:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:46.133 15:54:25 -- target/connect_stress.sh@28 -- # cat 00:11:46.133 15:54:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:46.133 15:54:25 -- target/connect_stress.sh@28 -- # cat 00:11:46.133 15:54:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:46.133 15:54:25 -- target/connect_stress.sh@28 -- # cat 00:11:46.133 15:54:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:46.133 15:54:25 -- target/connect_stress.sh@28 -- # cat 00:11:46.133 15:54:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:46.133 15:54:25 -- target/connect_stress.sh@28 -- # cat 00:11:46.133 15:54:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:46.133 15:54:25 -- target/connect_stress.sh@28 -- # cat 00:11:46.133 15:54:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:46.133 15:54:25 -- target/connect_stress.sh@28 -- # cat 00:11:46.133 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.133 15:54:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:46.133 15:54:25 -- target/connect_stress.sh@28 -- # cat 00:11:46.133 15:54:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:46.133 15:54:25 -- target/connect_stress.sh@28 -- # cat 00:11:46.133 15:54:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:46.133 15:54:25 -- target/connect_stress.sh@28 -- # cat 00:11:46.133 15:54:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:46.133 15:54:25 -- target/connect_stress.sh@28 -- # cat 00:11:46.133 15:54:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:46.133 15:54:25 -- target/connect_stress.sh@28 -- # cat 00:11:46.133 15:54:25 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:46.133 15:54:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.133 15:54:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.133 15:54:25 -- common/autotest_common.sh@10 -- # set +x 00:11:46.392 15:54:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.392 15:54:26 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:46.392 15:54:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.392 15:54:26 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:11:46.392 15:54:26 -- common/autotest_common.sh@10 -- # set +x 00:11:46.651 15:54:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.651 15:54:26 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:46.651 15:54:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.651 15:54:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.651 15:54:26 -- common/autotest_common.sh@10 -- # set +x 00:11:47.220 15:54:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.220 15:54:26 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:47.220 15:54:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.220 15:54:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.220 15:54:26 -- common/autotest_common.sh@10 -- # set +x 00:11:47.480 15:54:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.480 15:54:26 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:47.480 15:54:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.480 15:54:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.480 15:54:26 -- common/autotest_common.sh@10 -- # set +x 00:11:47.739 15:54:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.739 15:54:27 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:47.739 15:54:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.739 15:54:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.739 15:54:27 -- common/autotest_common.sh@10 -- # set +x 00:11:47.998 15:54:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.998 15:54:27 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:47.998 15:54:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.998 15:54:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.998 15:54:27 -- common/autotest_common.sh@10 -- # set +x 00:11:48.286 15:54:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:48.286 15:54:27 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:48.286 15:54:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:48.286 15:54:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:48.286 15:54:27 -- common/autotest_common.sh@10 -- # set +x 00:11:48.854 15:54:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:48.854 15:54:28 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:48.855 15:54:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:48.855 15:54:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:48.855 15:54:28 -- common/autotest_common.sh@10 -- # set +x 00:11:49.114 15:54:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:49.114 15:54:28 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:49.114 15:54:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.114 15:54:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:49.114 15:54:28 -- common/autotest_common.sh@10 -- # set +x 00:11:49.373 15:54:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:49.373 15:54:28 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:49.373 15:54:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.373 15:54:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:49.373 15:54:28 -- common/autotest_common.sh@10 -- # set +x 00:11:49.632 15:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:49.632 15:54:29 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:49.632 15:54:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.632 15:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 
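Before the stress client ever starts, connect_stress.sh provisions the target purely over JSON-RPC, as shown in the rpc_cmd calls above: a TCP transport, a subsystem nqn.2016-06.io.spdk:cnode1 that allows any host and caps namespaces at 10, a listener on 10.0.0.2:4420, and a null bdev to serve as backing storage. rpc_cmd is the harness wrapper around scripts/rpc.py, so the same setup issued by hand against the default /var/tmp/spdk.sock socket would look roughly like this (flags copied from the xtrace; run from an SPDK checkout):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, extra options from NVMF_TRANSPORT_OPTS
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
       -a -s SPDK00000000000001 -m 10               # -a: allow any host, -m: at most 10 namespaces
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
       -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512              # 1000 MB null bdev, 512-byte blocks

The connect_stress client is then launched against that listener (the -r transport-ID string in the command above) with -c 0x1 and -t 10, i.e. a run of roughly ten seconds.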
00:11:49.632 15:54:29 -- common/autotest_common.sh@10 -- # set +x 00:11:49.891 15:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:49.891 15:54:29 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:49.891 15:54:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.891 15:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:49.891 15:54:29 -- common/autotest_common.sh@10 -- # set +x 00:11:50.460 15:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:50.460 15:54:29 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:50.460 15:54:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.460 15:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:50.460 15:54:29 -- common/autotest_common.sh@10 -- # set +x 00:11:50.719 15:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:50.719 15:54:30 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:50.719 15:54:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.719 15:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:50.719 15:54:30 -- common/autotest_common.sh@10 -- # set +x 00:11:50.979 15:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:50.979 15:54:30 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:50.979 15:54:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.979 15:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:50.979 15:54:30 -- common/autotest_common.sh@10 -- # set +x 00:11:51.238 15:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.238 15:54:30 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:51.238 15:54:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.238 15:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.238 15:54:30 -- common/autotest_common.sh@10 -- # set +x 00:11:51.497 15:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.497 15:54:31 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:51.497 15:54:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.497 15:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.497 15:54:31 -- common/autotest_common.sh@10 -- # set +x 00:11:52.066 15:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:52.066 15:54:31 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:52.066 15:54:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.066 15:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:52.066 15:54:31 -- common/autotest_common.sh@10 -- # set +x 00:11:52.326 15:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:52.326 15:54:31 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:52.326 15:54:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.326 15:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:52.326 15:54:31 -- common/autotest_common.sh@10 -- # set +x 00:11:52.585 15:54:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:52.585 15:54:32 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:52.585 15:54:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.585 15:54:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:52.585 15:54:32 -- common/autotest_common.sh@10 -- # set +x 00:11:52.845 15:54:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:52.845 15:54:32 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:52.845 15:54:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.845 15:54:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:52.845 
15:54:32 -- common/autotest_common.sh@10 -- # set +x 00:11:53.410 15:54:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:53.410 15:54:32 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:53.410 15:54:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.410 15:54:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:53.410 15:54:32 -- common/autotest_common.sh@10 -- # set +x 00:11:53.667 15:54:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:53.667 15:54:33 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:53.667 15:54:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.667 15:54:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:53.667 15:54:33 -- common/autotest_common.sh@10 -- # set +x 00:11:53.925 15:54:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:53.925 15:54:33 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:53.925 15:54:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.925 15:54:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:53.925 15:54:33 -- common/autotest_common.sh@10 -- # set +x 00:11:54.184 15:54:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.184 15:54:33 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:54.184 15:54:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.184 15:54:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.184 15:54:33 -- common/autotest_common.sh@10 -- # set +x 00:11:54.443 15:54:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.443 15:54:34 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:54.443 15:54:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.443 15:54:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.443 15:54:34 -- common/autotest_common.sh@10 -- # set +x 00:11:55.011 15:54:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.011 15:54:34 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:55.011 15:54:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.011 15:54:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.011 15:54:34 -- common/autotest_common.sh@10 -- # set +x 00:11:55.269 15:54:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.269 15:54:34 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:55.269 15:54:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.269 15:54:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.269 15:54:34 -- common/autotest_common.sh@10 -- # set +x 00:11:55.527 15:54:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.527 15:54:35 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:55.527 15:54:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.527 15:54:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.527 15:54:35 -- common/autotest_common.sh@10 -- # set +x 00:11:55.786 15:54:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.786 15:54:35 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:55.786 15:54:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.786 15:54:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.786 15:54:35 -- common/autotest_common.sh@10 -- # set +x 00:11:56.044 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:56.044 15:54:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:56.044 15:54:35 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:56.044 15:54:35 -- target/connect_stress.sh@35 -- # rpc_cmd 
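The alternating kill -0 2360898 / rpc_cmd entries that fill this stretch of the log are a supervision loop: connect_stress.sh keeps poking the target with RPC batches for as long as the connect_stress client (PID 2360898, started with -t 10, and indeed the timestamps span about ten seconds) is still alive, and stops once kill -0 reports 'No such process', which happens just below. kill -0 delivers no signal at all; it only tests whether the PID still exists. The shape of the loop, reconstructed from the line-34/line-35 xtrace (the rpc.txt redirect is an assumption, since redirections never appear in xtrace output):

  while kill -0 "$PERF_PID" 2> /dev/null; do      # liveness probe only, no signal delivered
      rpc_cmd < rpc.txt                           # replay the prepared RPC batch against the target
  done
  wait "$PERF_PID"                                # then reap the client and take its exit status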
00:11:56.044 15:54:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:56.044 15:54:35 -- common/autotest_common.sh@10 -- # set +x 00:11:56.613 15:54:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:56.613 15:54:36 -- target/connect_stress.sh@34 -- # kill -0 2360898 00:11:56.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2360898) - No such process 00:11:56.613 15:54:36 -- target/connect_stress.sh@38 -- # wait 2360898 00:11:56.613 15:54:36 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:56.613 15:54:36 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:56.613 15:54:36 -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:56.613 15:54:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:56.613 15:54:36 -- nvmf/common.sh@117 -- # sync 00:11:56.613 15:54:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:56.613 15:54:36 -- nvmf/common.sh@120 -- # set +e 00:11:56.613 15:54:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:56.613 15:54:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:56.613 rmmod nvme_tcp 00:11:56.613 rmmod nvme_fabrics 00:11:56.613 rmmod nvme_keyring 00:11:56.613 15:54:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:56.613 15:54:36 -- nvmf/common.sh@124 -- # set -e 00:11:56.613 15:54:36 -- nvmf/common.sh@125 -- # return 0 00:11:56.613 15:54:36 -- nvmf/common.sh@478 -- # '[' -n 2360649 ']' 00:11:56.613 15:54:36 -- nvmf/common.sh@479 -- # killprocess 2360649 00:11:56.613 15:54:36 -- common/autotest_common.sh@936 -- # '[' -z 2360649 ']' 00:11:56.613 15:54:36 -- common/autotest_common.sh@940 -- # kill -0 2360649 00:11:56.613 15:54:36 -- common/autotest_common.sh@941 -- # uname 00:11:56.613 15:54:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:56.613 15:54:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2360649 00:11:56.613 15:54:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:56.613 15:54:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:56.613 15:54:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2360649' 00:11:56.613 killing process with pid 2360649 00:11:56.613 15:54:36 -- common/autotest_common.sh@955 -- # kill 2360649 00:11:56.613 15:54:36 -- common/autotest_common.sh@960 -- # wait 2360649 00:11:57.990 15:54:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:57.990 15:54:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:57.991 15:54:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:57.991 15:54:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:57.991 15:54:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:57.991 15:54:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.991 15:54:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:57.991 15:54:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.898 15:54:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:59.898 00:11:59.898 real 0m20.117s 00:11:59.898 user 0m43.593s 00:11:59.898 sys 0m7.585s 00:11:59.898 15:54:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:59.898 15:54:39 -- common/autotest_common.sh@10 -- # set +x 00:11:59.898 ************************************ 00:11:59.898 END TEST nvmf_connect_stress 00:11:59.898 ************************************ 00:11:59.898 15:54:39 -- 
nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:59.898 15:54:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:59.898 15:54:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:59.898 15:54:39 -- common/autotest_common.sh@10 -- # set +x 00:12:00.157 ************************************ 00:12:00.157 START TEST nvmf_fused_ordering 00:12:00.157 ************************************ 00:12:00.158 15:54:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:00.158 * Looking for test storage... 00:12:00.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:00.158 15:54:39 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.158 15:54:39 -- nvmf/common.sh@7 -- # uname -s 00:12:00.158 15:54:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.158 15:54:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.158 15:54:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.158 15:54:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.158 15:54:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.158 15:54:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.158 15:54:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.158 15:54:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.158 15:54:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.158 15:54:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.158 15:54:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:00.158 15:54:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:00.158 15:54:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.158 15:54:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.158 15:54:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:00.158 15:54:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.158 15:54:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:00.158 15:54:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.158 15:54:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.158 15:54:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.158 15:54:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.158 15:54:39 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.158 15:54:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.158 15:54:39 -- paths/export.sh@5 -- # export PATH 00:12:00.158 15:54:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.158 15:54:39 -- nvmf/common.sh@47 -- # : 0 00:12:00.158 15:54:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:00.158 15:54:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:00.158 15:54:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.158 15:54:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.158 15:54:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.158 15:54:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:00.158 15:54:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:00.158 15:54:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:00.158 15:54:39 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:00.158 15:54:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:00.158 15:54:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.158 15:54:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:00.158 15:54:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:00.158 15:54:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:00.158 15:54:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.158 15:54:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:00.158 15:54:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.158 15:54:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:00.158 15:54:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:00.158 15:54:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:00.158 15:54:39 -- common/autotest_common.sh@10 -- # set +x 00:12:05.546 15:54:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:05.546 15:54:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:05.546 15:54:44 -- nvmf/common.sh@291 -- # local -a pci_devs 
00:12:05.546 15:54:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:05.546 15:54:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:05.546 15:54:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:05.546 15:54:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:05.546 15:54:44 -- nvmf/common.sh@295 -- # net_devs=() 00:12:05.546 15:54:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:05.546 15:54:44 -- nvmf/common.sh@296 -- # e810=() 00:12:05.546 15:54:44 -- nvmf/common.sh@296 -- # local -ga e810 00:12:05.546 15:54:44 -- nvmf/common.sh@297 -- # x722=() 00:12:05.546 15:54:44 -- nvmf/common.sh@297 -- # local -ga x722 00:12:05.546 15:54:44 -- nvmf/common.sh@298 -- # mlx=() 00:12:05.546 15:54:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:05.546 15:54:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:05.546 15:54:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:05.546 15:54:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:05.546 15:54:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:05.546 15:54:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:05.546 15:54:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:05.546 15:54:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:05.546 15:54:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:05.546 15:54:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:05.547 15:54:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:05.547 15:54:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:05.547 15:54:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:05.547 15:54:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:05.547 15:54:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:05.547 15:54:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:05.547 15:54:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:05.547 15:54:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:05.547 15:54:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:05.547 15:54:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:05.547 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:05.547 15:54:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:05.547 15:54:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:05.547 15:54:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.547 15:54:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.547 15:54:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:05.547 15:54:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:05.547 15:54:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:05.547 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:05.547 15:54:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:05.547 15:54:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:05.547 15:54:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.547 15:54:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.547 15:54:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:05.547 15:54:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:05.547 15:54:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:05.547 15:54:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
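The wall of array assignments above is nvmf/common.sh classifying every NIC in the system by PCI vendor:device ID — 0x1592/0x159b for Intel E810, 0x37d2 for X722, plus a list of Mellanox IDs — keeping only the E810 entries because SPDK_TEST_NVMF_NICS=e810, and then resolving each surviving PCI address to its kernel netdev through /sys/bus/pci/devices/$pci/net/*, which is where the cvl_0_0 and cvl_0_1 names come from. A hand-rolled equivalent of that scan (a sketch using lspci rather than the harness's pci_bus_cache mechanism):

  # Find E810 ports by device ID and map each to its netdev name through sysfs.
  for id in 159b 1592; do
      for pci in $(lspci -D -n -d 8086:$id | awk '{print $1}'); do
          for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
              [[ -e $netdev ]] && echo "$pci -> ${netdev##*/}"
          done
      done
  done

On this machine that would print the two 0000:86:00.x ports reported in the 'Found net devices under ...' lines.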
00:12:05.547 15:54:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:05.547 15:54:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.547 15:54:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:05.547 15:54:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.547 15:54:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:05.547 Found net devices under 0000:86:00.0: cvl_0_0 00:12:05.547 15:54:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.547 15:54:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:05.547 15:54:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.547 15:54:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:05.547 15:54:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.547 15:54:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:05.547 Found net devices under 0000:86:00.1: cvl_0_1 00:12:05.547 15:54:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.547 15:54:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:05.547 15:54:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:05.547 15:54:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:05.547 15:54:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:05.547 15:54:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:05.547 15:54:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.547 15:54:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.547 15:54:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:05.547 15:54:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:05.547 15:54:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:05.547 15:54:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:05.547 15:54:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:05.547 15:54:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:05.547 15:54:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.547 15:54:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:05.547 15:54:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:05.547 15:54:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:05.547 15:54:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:05.547 15:54:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:05.547 15:54:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:05.547 15:54:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:05.547 15:54:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:05.547 15:54:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:05.547 15:54:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:05.547 15:54:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:05.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:05.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:12:05.547 00:12:05.547 --- 10.0.0.2 ping statistics --- 00:12:05.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.547 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:12:05.547 15:54:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:05.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:05.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.385 ms 00:12:05.547 00:12:05.547 --- 10.0.0.1 ping statistics --- 00:12:05.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.547 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:12:05.547 15:54:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.547 15:54:45 -- nvmf/common.sh@411 -- # return 0 00:12:05.547 15:54:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:05.547 15:54:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.547 15:54:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:05.547 15:54:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:05.547 15:54:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.547 15:54:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:05.547 15:54:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:05.547 15:54:45 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:05.547 15:54:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:05.547 15:54:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:05.547 15:54:45 -- common/autotest_common.sh@10 -- # set +x 00:12:05.547 15:54:45 -- nvmf/common.sh@470 -- # nvmfpid=2366279 00:12:05.547 15:54:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:05.547 15:54:45 -- nvmf/common.sh@471 -- # waitforlisten 2366279 00:12:05.547 15:54:45 -- common/autotest_common.sh@817 -- # '[' -z 2366279 ']' 00:12:05.547 15:54:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.547 15:54:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:05.547 15:54:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.547 15:54:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:05.547 15:54:45 -- common/autotest_common.sh@10 -- # set +x 00:12:05.547 [2024-04-26 15:54:45.149210] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:12:05.547 [2024-04-26 15:54:45.149295] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.547 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.806 [2024-04-26 15:54:45.259291] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.807 [2024-04-26 15:54:45.476264] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.807 [2024-04-26 15:54:45.476312] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
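nvmfappstart above backgrounds nvmf_tgt inside the target namespace (core mask 0x2 for this test, 0xE for the connect_stress run earlier), records its PID in nvmfpid, and then sits in waitforlisten printing 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' until the RPC socket answers, which is why that banner appears before the DPDK/EAL initialization lines here. Stripped of harness plumbing, the pattern is roughly this (a sketch, not the actual waitforlisten implementation; paths relative to an SPDK checkout):

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  for (( i = 0; i < 100; i++ )); do
      kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      # Once a trivial RPC succeeds on /var/tmp/spdk.sock, the target is ready.
      scripts/rpc.py -t 1 rpc_get_methods &> /dev/null && break
      sleep 0.1
  done

Note that the RPC socket is a path-based Unix socket, so it is reachable from the root namespace even though the target process runs inside cvl_0_0_ns_spdk.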
00:12:05.807 [2024-04-26 15:54:45.476325] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.807 [2024-04-26 15:54:45.476337] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.807 [2024-04-26 15:54:45.476350] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.807 [2024-04-26 15:54:45.476408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.376 15:54:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:06.376 15:54:45 -- common/autotest_common.sh@850 -- # return 0 00:12:06.376 15:54:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:06.376 15:54:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:06.376 15:54:45 -- common/autotest_common.sh@10 -- # set +x 00:12:06.376 15:54:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.376 15:54:45 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:06.376 15:54:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.376 15:54:45 -- common/autotest_common.sh@10 -- # set +x 00:12:06.376 [2024-04-26 15:54:45.959909] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.376 15:54:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.376 15:54:45 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:06.376 15:54:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.376 15:54:45 -- common/autotest_common.sh@10 -- # set +x 00:12:06.376 15:54:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.376 15:54:45 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.376 15:54:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.376 15:54:45 -- common/autotest_common.sh@10 -- # set +x 00:12:06.376 [2024-04-26 15:54:45.976085] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.376 15:54:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.376 15:54:45 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:06.376 15:54:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.377 15:54:45 -- common/autotest_common.sh@10 -- # set +x 00:12:06.377 NULL1 00:12:06.377 15:54:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.377 15:54:45 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:06.377 15:54:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.377 15:54:45 -- common/autotest_common.sh@10 -- # set +x 00:12:06.377 15:54:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.377 15:54:45 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:06.377 15:54:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.377 15:54:45 -- common/autotest_common.sh@10 -- # set +x 00:12:06.377 15:54:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.377 15:54:46 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:06.377 [2024-04-26 15:54:46.044880] Starting SPDK v24.05-pre 
git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:12:06.377 [2024-04-26 15:54:46.044934] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2366428 ] 00:12:06.636 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.571 Attached to nqn.2016-06.io.spdk:cnode1 00:12:07.571 Namespace ID: 1 size: 1GB 00:12:07.571 fused_ordering(0) 00:12:07.571 fused_ordering(1) 00:12:07.571 fused_ordering(2) 00:12:07.571 fused_ordering(3) 00:12:07.572 fused_ordering(4) 00:12:07.572 fused_ordering(5) 00:12:07.572 fused_ordering(6) 00:12:07.572 fused_ordering(7) 00:12:07.572 fused_ordering(8) 00:12:07.572 fused_ordering(9) 00:12:07.572 fused_ordering(10) 00:12:07.572 fused_ordering(11) 00:12:07.572 fused_ordering(12) 00:12:07.572 fused_ordering(13) 00:12:07.572 fused_ordering(14) 00:12:07.572 fused_ordering(15) 00:12:07.572 fused_ordering(16) 00:12:07.572 fused_ordering(17) 00:12:07.572 fused_ordering(18) 00:12:07.572 fused_ordering(19) 00:12:07.572 fused_ordering(20) 00:12:07.572 fused_ordering(21) 00:12:07.572 fused_ordering(22) 00:12:07.572 fused_ordering(23) 00:12:07.572 fused_ordering(24) 00:12:07.572 fused_ordering(25) 00:12:07.572 fused_ordering(26) 00:12:07.572 fused_ordering(27) 00:12:07.572 fused_ordering(28) 00:12:07.572 fused_ordering(29) 00:12:07.572 fused_ordering(30) 00:12:07.572 fused_ordering(31) 00:12:07.572 fused_ordering(32) 00:12:07.572 fused_ordering(33) 00:12:07.572 fused_ordering(34) 00:12:07.572 fused_ordering(35) 00:12:07.572 fused_ordering(36) 00:12:07.572 fused_ordering(37) 00:12:07.572 fused_ordering(38) 00:12:07.572 fused_ordering(39) 00:12:07.572 fused_ordering(40) 00:12:07.572 fused_ordering(41) 00:12:07.572 fused_ordering(42) 00:12:07.572 fused_ordering(43) 00:12:07.572 fused_ordering(44) 00:12:07.572 fused_ordering(45) 00:12:07.572 fused_ordering(46) 00:12:07.572 fused_ordering(47) 00:12:07.572 fused_ordering(48) 00:12:07.572 fused_ordering(49) 00:12:07.572 fused_ordering(50) 00:12:07.572 fused_ordering(51) 00:12:07.572 fused_ordering(52) 00:12:07.572 fused_ordering(53) 00:12:07.572 fused_ordering(54) 00:12:07.572 fused_ordering(55) 00:12:07.572 fused_ordering(56) 00:12:07.572 fused_ordering(57) 00:12:07.572 fused_ordering(58) 00:12:07.572 fused_ordering(59) 00:12:07.572 fused_ordering(60) 00:12:07.572 fused_ordering(61) 00:12:07.572 fused_ordering(62) 00:12:07.572 fused_ordering(63) 00:12:07.572 fused_ordering(64) 00:12:07.572 fused_ordering(65) 00:12:07.572 fused_ordering(66) 00:12:07.572 fused_ordering(67) 00:12:07.572 fused_ordering(68) 00:12:07.572 fused_ordering(69) 00:12:07.572 fused_ordering(70) 00:12:07.572 fused_ordering(71) 00:12:07.572 fused_ordering(72) 00:12:07.572 fused_ordering(73) 00:12:07.572 fused_ordering(74) 00:12:07.572 fused_ordering(75) 00:12:07.572 fused_ordering(76) 00:12:07.572 fused_ordering(77) 00:12:07.572 fused_ordering(78) 00:12:07.572 fused_ordering(79) 00:12:07.572 fused_ordering(80) 00:12:07.572 fused_ordering(81) 00:12:07.572 fused_ordering(82) 00:12:07.572 fused_ordering(83) 00:12:07.572 fused_ordering(84) 00:12:07.572 fused_ordering(85) 00:12:07.572 fused_ordering(86) 00:12:07.572 fused_ordering(87) 00:12:07.572 fused_ordering(88) 00:12:07.572 fused_ordering(89) 00:12:07.572 fused_ordering(90) 00:12:07.572 fused_ordering(91) 00:12:07.572 fused_ordering(92) 00:12:07.572 fused_ordering(93) 00:12:07.572 fused_ordering(94) 00:12:07.572 fused_ordering(95) 
00:12:07.572 fused_ordering(96) [... fused_ordering(97) through fused_ordering(954) reported consecutively with no gaps, timestamps 00:12:07.572–00:12:10.582; run abbreviated ...] 00:12:10.582 fused_ordering(955)
00:12:10.582 fused_ordering(956) 00:12:10.582 fused_ordering(957) 00:12:10.582 fused_ordering(958) 00:12:10.582 fused_ordering(959) 00:12:10.582 fused_ordering(960) 00:12:10.582 fused_ordering(961) 00:12:10.582 fused_ordering(962) 00:12:10.582 fused_ordering(963) 00:12:10.582 fused_ordering(964) 00:12:10.582 fused_ordering(965) 00:12:10.582 fused_ordering(966) 00:12:10.582 fused_ordering(967) 00:12:10.582 fused_ordering(968) 00:12:10.582 fused_ordering(969) 00:12:10.582 fused_ordering(970) 00:12:10.582 fused_ordering(971) 00:12:10.582 fused_ordering(972) 00:12:10.582 fused_ordering(973) 00:12:10.582 fused_ordering(974) 00:12:10.582 fused_ordering(975) 00:12:10.582 fused_ordering(976) 00:12:10.582 fused_ordering(977) 00:12:10.582 fused_ordering(978) 00:12:10.582 fused_ordering(979) 00:12:10.582 fused_ordering(980) 00:12:10.582 fused_ordering(981) 00:12:10.582 fused_ordering(982) 00:12:10.582 fused_ordering(983) 00:12:10.582 fused_ordering(984) 00:12:10.582 fused_ordering(985) 00:12:10.582 fused_ordering(986) 00:12:10.582 fused_ordering(987) 00:12:10.582 fused_ordering(988) 00:12:10.582 fused_ordering(989) 00:12:10.582 fused_ordering(990) 00:12:10.582 fused_ordering(991) 00:12:10.582 fused_ordering(992) 00:12:10.582 fused_ordering(993) 00:12:10.582 fused_ordering(994) 00:12:10.582 fused_ordering(995) 00:12:10.582 fused_ordering(996) 00:12:10.582 fused_ordering(997) 00:12:10.582 fused_ordering(998) 00:12:10.582 fused_ordering(999) 00:12:10.582 fused_ordering(1000) 00:12:10.582 fused_ordering(1001) 00:12:10.582 fused_ordering(1002) 00:12:10.582 fused_ordering(1003) 00:12:10.582 fused_ordering(1004) 00:12:10.582 fused_ordering(1005) 00:12:10.582 fused_ordering(1006) 00:12:10.582 fused_ordering(1007) 00:12:10.582 fused_ordering(1008) 00:12:10.582 fused_ordering(1009) 00:12:10.582 fused_ordering(1010) 00:12:10.582 fused_ordering(1011) 00:12:10.582 fused_ordering(1012) 00:12:10.582 fused_ordering(1013) 00:12:10.582 fused_ordering(1014) 00:12:10.582 fused_ordering(1015) 00:12:10.582 fused_ordering(1016) 00:12:10.582 fused_ordering(1017) 00:12:10.582 fused_ordering(1018) 00:12:10.582 fused_ordering(1019) 00:12:10.582 fused_ordering(1020) 00:12:10.582 fused_ordering(1021) 00:12:10.582 fused_ordering(1022) 00:12:10.582 fused_ordering(1023) 00:12:10.582 15:54:50 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:10.582 15:54:50 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:10.582 15:54:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:10.582 15:54:50 -- nvmf/common.sh@117 -- # sync 00:12:10.582 15:54:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:10.582 15:54:50 -- nvmf/common.sh@120 -- # set +e 00:12:10.582 15:54:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:10.582 15:54:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:10.582 rmmod nvme_tcp 00:12:10.582 rmmod nvme_fabrics 00:12:10.582 rmmod nvme_keyring 00:12:10.582 15:54:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:10.582 15:54:50 -- nvmf/common.sh@124 -- # set -e 00:12:10.582 15:54:50 -- nvmf/common.sh@125 -- # return 0 00:12:10.582 15:54:50 -- nvmf/common.sh@478 -- # '[' -n 2366279 ']' 00:12:10.582 15:54:50 -- nvmf/common.sh@479 -- # killprocess 2366279 00:12:10.582 15:54:50 -- common/autotest_common.sh@936 -- # '[' -z 2366279 ']' 00:12:10.582 15:54:50 -- common/autotest_common.sh@940 -- # kill -0 2366279 00:12:10.582 15:54:50 -- common/autotest_common.sh@941 -- # uname 00:12:10.582 15:54:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:10.582 15:54:50 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2366279 00:12:10.583 15:54:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:10.583 15:54:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:10.583 15:54:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2366279' 00:12:10.583 killing process with pid 2366279 00:12:10.583 15:54:50 -- common/autotest_common.sh@955 -- # kill 2366279 00:12:10.583 15:54:50 -- common/autotest_common.sh@960 -- # wait 2366279 00:12:11.962 15:54:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:11.962 15:54:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:11.962 15:54:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:11.962 15:54:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:11.962 15:54:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:11.962 15:54:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.962 15:54:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.962 15:54:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.864 15:54:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:13.864 00:12:13.864 real 0m13.823s 00:12:13.864 user 0m9.484s 00:12:13.864 sys 0m6.781s 00:12:13.864 15:54:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:13.864 15:54:53 -- common/autotest_common.sh@10 -- # set +x 00:12:13.864 ************************************ 00:12:13.864 END TEST nvmf_fused_ordering 00:12:13.864 ************************************ 00:12:13.864 15:54:53 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:13.864 15:54:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:13.864 15:54:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:13.864 15:54:53 -- common/autotest_common.sh@10 -- # set +x 00:12:14.123 ************************************ 00:12:14.123 START TEST nvmf_delete_subsystem 00:12:14.123 ************************************ 00:12:14.123 15:54:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:14.123 * Looking for test storage... 
00:12:14.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.123 15:54:53 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.123 15:54:53 -- nvmf/common.sh@7 -- # uname -s 00:12:14.123 15:54:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.123 15:54:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.123 15:54:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.123 15:54:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.123 15:54:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.123 15:54:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.123 15:54:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.123 15:54:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.123 15:54:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.123 15:54:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.123 15:54:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:14.123 15:54:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:14.123 15:54:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.123 15:54:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.123 15:54:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.123 15:54:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.123 15:54:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.123 15:54:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.123 15:54:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.123 15:54:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.123 15:54:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.123 15:54:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.123 15:54:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.123 15:54:53 -- paths/export.sh@5 -- # export PATH 00:12:14.123 15:54:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.123 15:54:53 -- nvmf/common.sh@47 -- # : 0 00:12:14.123 15:54:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:14.123 15:54:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:14.123 15:54:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.123 15:54:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.123 15:54:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.123 15:54:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:14.123 15:54:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:14.123 15:54:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:14.123 15:54:53 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:14.123 15:54:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:14.123 15:54:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.123 15:54:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:14.123 15:54:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:14.123 15:54:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:14.123 15:54:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.123 15:54:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.123 15:54:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.123 15:54:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:14.123 15:54:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:14.123 15:54:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:14.123 15:54:53 -- common/autotest_common.sh@10 -- # set +x 00:12:19.397 15:54:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:19.397 15:54:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:19.397 15:54:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:19.397 15:54:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:19.397 15:54:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:19.397 15:54:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:19.397 15:54:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:19.397 15:54:58 -- nvmf/common.sh@295 -- # net_devs=() 00:12:19.397 15:54:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:19.397 15:54:58 -- nvmf/common.sh@296 -- # e810=() 00:12:19.397 15:54:58 -- nvmf/common.sh@296 -- # local -ga e810 00:12:19.397 15:54:58 -- nvmf/common.sh@297 -- # x722=() 
00:12:19.397 15:54:58 -- nvmf/common.sh@297 -- # local -ga x722 00:12:19.397 15:54:58 -- nvmf/common.sh@298 -- # mlx=() 00:12:19.397 15:54:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:19.397 15:54:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:19.398 15:54:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:19.398 15:54:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:19.398 15:54:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:19.398 15:54:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:19.398 15:54:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:19.398 15:54:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:19.398 15:54:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:19.398 15:54:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:19.398 15:54:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:19.398 15:54:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:19.398 15:54:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:19.398 15:54:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:19.398 15:54:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:19.398 15:54:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:19.398 15:54:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:19.398 15:54:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:19.398 15:54:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:19.398 15:54:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:19.398 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:19.398 15:54:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:19.398 15:54:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:19.398 15:54:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.398 15:54:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.398 15:54:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:19.398 15:54:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:19.398 15:54:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:19.398 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:19.398 15:54:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:19.398 15:54:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:19.398 15:54:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.398 15:54:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.398 15:54:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:19.398 15:54:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:19.398 15:54:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:19.398 15:54:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:19.398 15:54:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:19.398 15:54:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.398 15:54:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:19.398 15:54:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.398 15:54:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:19.398 Found net devices under 0000:86:00.0: cvl_0_0 00:12:19.398 15:54:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:12:19.398 15:54:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:19.398 15:54:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.398 15:54:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:19.398 15:54:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.398 15:54:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:19.398 Found net devices under 0000:86:00.1: cvl_0_1 00:12:19.398 15:54:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.398 15:54:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:19.398 15:54:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:19.398 15:54:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:19.398 15:54:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:19.398 15:54:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:19.398 15:54:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.398 15:54:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.398 15:54:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:19.398 15:54:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:19.398 15:54:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:19.398 15:54:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:19.398 15:54:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:19.398 15:54:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:19.398 15:54:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.398 15:54:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:19.398 15:54:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:19.398 15:54:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:19.398 15:54:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:19.398 15:54:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:19.398 15:54:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:19.398 15:54:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:19.398 15:54:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:19.398 15:54:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:19.398 15:54:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:19.398 15:54:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:19.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:12:19.398 00:12:19.398 --- 10.0.0.2 ping statistics --- 00:12:19.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.398 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:12:19.398 15:54:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:19.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:19.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.383 ms 00:12:19.398 00:12:19.398 --- 10.0.0.1 ping statistics --- 00:12:19.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.398 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:12:19.398 15:54:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.398 15:54:59 -- nvmf/common.sh@411 -- # return 0 00:12:19.398 15:54:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:19.398 15:54:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.398 15:54:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:19.398 15:54:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:19.398 15:54:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.398 15:54:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:19.398 15:54:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:19.398 15:54:59 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:19.398 15:54:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:19.398 15:54:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:19.398 15:54:59 -- common/autotest_common.sh@10 -- # set +x 00:12:19.398 15:54:59 -- nvmf/common.sh@470 -- # nvmfpid=2370741 00:12:19.398 15:54:59 -- nvmf/common.sh@471 -- # waitforlisten 2370741 00:12:19.398 15:54:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:19.398 15:54:59 -- common/autotest_common.sh@817 -- # '[' -z 2370741 ']' 00:12:19.398 15:54:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.398 15:54:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:19.398 15:54:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.398 15:54:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:19.398 15:54:59 -- common/autotest_common.sh@10 -- # set +x 00:12:19.656 [2024-04-26 15:54:59.139712] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:12:19.657 [2024-04-26 15:54:59.139804] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.657 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.657 [2024-04-26 15:54:59.250024] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:19.915 [2024-04-26 15:54:59.461975] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.915 [2024-04-26 15:54:59.462022] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.915 [2024-04-26 15:54:59.462032] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:19.916 [2024-04-26 15:54:59.462058] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:19.916 [2024-04-26 15:54:59.462074] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
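The nvmftestinit trace above shows how the job wires the two E810 ports together on a single host: one port (cvl_0_0) is moved into a private network namespace for the target, the other (cvl_0_1) stays in the root namespace for the initiator, and TCP port 4420 is opened between them. A condensed sketch of that setup, run as root, with the interface names, addresses and nvmf_tgt arguments copied from this run (they are specific to this job, not general defaults):

    # flush stale addresses on both E810 ports
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    # target port goes into its own namespace, initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in and sanity-check reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target application is then launched inside the namespace, as traced above
    ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
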
00:12:19.916 [2024-04-26 15:54:59.462141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.916 [2024-04-26 15:54:59.462160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.483 15:54:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:20.483 15:54:59 -- common/autotest_common.sh@850 -- # return 0 00:12:20.483 15:54:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:20.483 15:54:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:20.483 15:54:59 -- common/autotest_common.sh@10 -- # set +x 00:12:20.483 15:54:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.483 15:54:59 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:20.483 15:54:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:20.483 15:54:59 -- common/autotest_common.sh@10 -- # set +x 00:12:20.483 [2024-04-26 15:54:59.962202] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.483 15:54:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:20.483 15:54:59 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:20.483 15:54:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:20.483 15:54:59 -- common/autotest_common.sh@10 -- # set +x 00:12:20.483 15:54:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:20.483 15:54:59 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.483 15:54:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:20.483 15:54:59 -- common/autotest_common.sh@10 -- # set +x 00:12:20.483 [2024-04-26 15:54:59.978383] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.483 15:54:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:20.483 15:54:59 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:20.483 15:54:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:20.483 15:54:59 -- common/autotest_common.sh@10 -- # set +x 00:12:20.483 NULL1 00:12:20.483 15:54:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:20.483 15:54:59 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:20.483 15:54:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:20.483 15:54:59 -- common/autotest_common.sh@10 -- # set +x 00:12:20.483 Delay0 00:12:20.483 15:54:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:20.483 15:54:59 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:20.483 15:54:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:20.483 15:54:59 -- common/autotest_common.sh@10 -- # set +x 00:12:20.483 15:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:20.483 15:55:00 -- target/delete_subsystem.sh@28 -- # perf_pid=2370896 00:12:20.483 15:55:00 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:20.483 15:55:00 -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:20.483 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.483 [2024-04-26 15:55:00.094336] 
subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:22.389 15:55:02 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.389 15:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:22.389 15:55:02 -- common/autotest_common.sh@10 -- # set +x 00:12:22.648 [queued Read/Write requests begin completing with error (sct=0, sc=8), interleaved with starting I/O failed: -6, as the subsystem is deleted; runs abbreviated] 00:12:22.649 [2024-04-26 15:55:02.202705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010040 is same with the state(5) to be set 00:12:22.649 [further Read/Write completions with error (sct=0, sc=8); run abbreviated] 00:12:22.649 [2024-04-26 15:55:02.203619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010640 is same with the state(5) to be set 00:12:22.649 [further Read/Write completions with error (sct=0, sc=8); run abbreviated] 00:12:22.649 [2024-04-26 15:55:02.204144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010240 is same with the state(5) to be set 00:12:22.649 [remaining queued I/O continues to complete with error (sct=0, sc=8) / starting I/O failed: -6] 00:12:22.649
Read completed with error (sct=0, sc=8) 00:12:22.649 Read completed with error (sct=0, sc=8) 00:12:22.649 starting I/O failed: -6 00:12:22.649 Write completed with error (sct=0, sc=8) 00:12:22.649 Read completed with error (sct=0, sc=8) 00:12:22.649 Read completed with error (sct=0, sc=8) 00:12:22.649 Read completed with error (sct=0, sc=8) 00:12:22.649 starting I/O failed: -6 00:12:22.649 Read completed with error (sct=0, sc=8) 00:12:22.649 Write completed with error (sct=0, sc=8) 00:12:22.649 Read completed with error (sct=0, sc=8) 00:12:22.649 Read completed with error (sct=0, sc=8) 00:12:22.649 starting I/O failed: -6 00:12:22.649 Write completed with error (sct=0, sc=8) 00:12:22.649 Read completed with error (sct=0, sc=8) 00:12:22.649 Read completed with error (sct=0, sc=8) 00:12:22.649 Write completed with error (sct=0, sc=8) 00:12:22.649 starting I/O failed: -6 00:12:22.649 [2024-04-26 15:55:02.204930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002840 is same with the state(5) to be set 00:12:23.586 [2024-04-26 15:55:03.154519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002240 is same with the state(5) to be set 00:12:23.586 Write completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Write completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Write completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Write completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 [2024-04-26 15:55:03.207188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010440 is same with the state(5) to be set 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Write completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Write completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Write completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Write completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read 
completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Write completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 [2024-04-26 15:55:03.208577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002440 is same with the state(5) to be set 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Write completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Write completed with error (sct=0, sc=8) 00:12:23.586 Write completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Write completed with error (sct=0, sc=8) 00:12:23.586 Write completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Write completed with error (sct=0, sc=8) 00:12:23.586 Write completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Write completed with error (sct=0, sc=8) 00:12:23.586 Write completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.586 Write completed with error (sct=0, sc=8) 00:12:23.586 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Write completed with error (sct=0, sc=8) 00:12:23.587 Write completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Write completed with error (sct=0, sc=8) 00:12:23.587 Write completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Write completed with error (sct=0, sc=8) 00:12:23.587 [2024-04-26 15:55:03.208833] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002640 is same with the state(5) to be set 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Write completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Write completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Write completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Write completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Write completed with error (sct=0, sc=8) 00:12:23.587 Write 
completed with error (sct=0, sc=8) 00:12:23.587 Write completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Write completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 Read completed with error (sct=0, sc=8) 00:12:23.587 [2024-04-26 15:55:03.209321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002a40 is same with the state(5) to be set 00:12:23.587 [2024-04-26 15:55:03.214457] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000002240 (9): Bad file descriptor 00:12:23.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:23.587 15:55:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:23.587 15:55:03 -- target/delete_subsystem.sh@34 -- # delay=0 00:12:23.587 15:55:03 -- target/delete_subsystem.sh@35 -- # kill -0 2370896 00:12:23.587 15:55:03 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:23.587 Initializing NVMe Controllers 00:12:23.587 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:23.587 Controller IO queue size 128, less than required. 00:12:23.587 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:23.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:23.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:23.587 Initialization complete. Launching workers. 
00:12:23.587 ======================================================== 00:12:23.587 Latency(us) 00:12:23.587 Device Information : IOPS MiB/s Average min max 00:12:23.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 188.87 0.09 950975.75 1067.37 1014766.14 00:12:23.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 149.71 0.07 900713.28 948.49 1012400.66 00:12:23.587 ======================================================== 00:12:23.587 Total : 338.58 0.17 928751.35 948.49 1014766.14 00:12:23.587 00:12:24.154 15:55:03 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:24.154 15:55:03 -- target/delete_subsystem.sh@35 -- # kill -0 2370896 00:12:24.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2370896) - No such process 00:12:24.154 15:55:03 -- target/delete_subsystem.sh@45 -- # NOT wait 2370896 00:12:24.154 15:55:03 -- common/autotest_common.sh@638 -- # local es=0 00:12:24.154 15:55:03 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 2370896 00:12:24.154 15:55:03 -- common/autotest_common.sh@626 -- # local arg=wait 00:12:24.154 15:55:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:24.154 15:55:03 -- common/autotest_common.sh@630 -- # type -t wait 00:12:24.154 15:55:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:24.154 15:55:03 -- common/autotest_common.sh@641 -- # wait 2370896 00:12:24.154 15:55:03 -- common/autotest_common.sh@641 -- # es=1 00:12:24.154 15:55:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:24.154 15:55:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:24.154 15:55:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:24.154 15:55:03 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:24.154 15:55:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.154 15:55:03 -- common/autotest_common.sh@10 -- # set +x 00:12:24.154 15:55:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.154 15:55:03 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.154 15:55:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.154 15:55:03 -- common/autotest_common.sh@10 -- # set +x 00:12:24.154 [2024-04-26 15:55:03.743634] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.154 15:55:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.154 15:55:03 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:24.154 15:55:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.154 15:55:03 -- common/autotest_common.sh@10 -- # set +x 00:12:24.154 15:55:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.154 15:55:03 -- target/delete_subsystem.sh@54 -- # perf_pid=2371464 00:12:24.154 15:55:03 -- target/delete_subsystem.sh@56 -- # delay=0 00:12:24.154 15:55:03 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:24.154 15:55:03 -- target/delete_subsystem.sh@57 -- # kill -0 2371464 00:12:24.154 15:55:03 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:24.154 EAL: No free 2048 kB hugepages 
reported on node 1 00:12:24.413 [2024-04-26 15:55:03.844709] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:24.672 15:55:04 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:24.672 15:55:04 -- target/delete_subsystem.sh@57 -- # kill -0 2371464 00:12:24.672 15:55:04 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:25.239 15:55:04 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:25.239 15:55:04 -- target/delete_subsystem.sh@57 -- # kill -0 2371464 00:12:25.239 15:55:04 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:25.806 15:55:05 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:25.806 15:55:05 -- target/delete_subsystem.sh@57 -- # kill -0 2371464 00:12:25.806 15:55:05 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:26.374 15:55:05 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:26.374 15:55:05 -- target/delete_subsystem.sh@57 -- # kill -0 2371464 00:12:26.374 15:55:05 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:26.633 15:55:06 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:26.633 15:55:06 -- target/delete_subsystem.sh@57 -- # kill -0 2371464 00:12:26.633 15:55:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:27.202 15:55:06 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:27.202 15:55:06 -- target/delete_subsystem.sh@57 -- # kill -0 2371464 00:12:27.202 15:55:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:27.771 Initializing NVMe Controllers 00:12:27.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:27.771 Controller IO queue size 128, less than required. 00:12:27.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:27.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:27.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:27.771 Initialization complete. Launching workers. 
00:12:27.771 ======================================================== 00:12:27.771 Latency(us) 00:12:27.771 Device Information : IOPS MiB/s Average min max 00:12:27.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005558.21 1000382.89 1011826.59 00:12:27.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005028.78 1000366.12 1041967.46 00:12:27.771 ======================================================== 00:12:27.771 Total : 256.00 0.12 1005293.49 1000366.12 1041967.46 00:12:27.771 00:12:27.771 15:55:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:27.771 15:55:07 -- target/delete_subsystem.sh@57 -- # kill -0 2371464 00:12:27.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2371464) - No such process 00:12:27.771 15:55:07 -- target/delete_subsystem.sh@67 -- # wait 2371464 00:12:27.771 15:55:07 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:27.771 15:55:07 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:27.771 15:55:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:27.771 15:55:07 -- nvmf/common.sh@117 -- # sync 00:12:27.771 15:55:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:27.771 15:55:07 -- nvmf/common.sh@120 -- # set +e 00:12:27.771 15:55:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:27.771 15:55:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:27.771 rmmod nvme_tcp 00:12:27.771 rmmod nvme_fabrics 00:12:27.771 rmmod nvme_keyring 00:12:27.771 15:55:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:27.771 15:55:07 -- nvmf/common.sh@124 -- # set -e 00:12:27.771 15:55:07 -- nvmf/common.sh@125 -- # return 0 00:12:27.771 15:55:07 -- nvmf/common.sh@478 -- # '[' -n 2370741 ']' 00:12:27.771 15:55:07 -- nvmf/common.sh@479 -- # killprocess 2370741 00:12:27.771 15:55:07 -- common/autotest_common.sh@936 -- # '[' -z 2370741 ']' 00:12:27.771 15:55:07 -- common/autotest_common.sh@940 -- # kill -0 2370741 00:12:27.771 15:55:07 -- common/autotest_common.sh@941 -- # uname 00:12:27.771 15:55:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:27.771 15:55:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2370741 00:12:27.771 15:55:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:27.771 15:55:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:27.771 15:55:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2370741' 00:12:27.771 killing process with pid 2370741 00:12:27.771 15:55:07 -- common/autotest_common.sh@955 -- # kill 2370741 00:12:27.771 15:55:07 -- common/autotest_common.sh@960 -- # wait 2370741 00:12:29.150 15:55:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:29.150 15:55:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:29.150 15:55:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:29.150 15:55:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:29.150 15:55:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:29.150 15:55:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.150 15:55:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.150 15:55:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.056 15:55:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:31.056 00:12:31.056 real 0m17.065s 00:12:31.056 user 0m31.838s 00:12:31.056 sys 0m5.050s 
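For reference, the nvmf_delete_subsystem exercise recorded above reduces to deleting a subsystem while spdk_nvme_perf still has I/O queued against it, then confirming that the perf process exits on its own. A minimal sketch, reconstructed only from commands visible in this trace (rpc.py and spdk_nvme_perf stand for the full scripts/rpc.py and build/bin paths shown above; the autotest wrappers such as rpc_cmd and NOT are omitted and the polling limit is simplified):

    # target side: a subsystem with one namespace and a TCP listener on 10.0.0.2:4420
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # host side: start perf in the background so I/O is in flight
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    # delete the subsystem underneath the run: queued commands complete with the
    # (sct=0, sc=8) errors seen above and perf exits reporting "errors occurred"
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # the script then polls for that exit rather than waiting on the PID directly
    while kill -0 "$perf_pid" 2>/dev/null; do sleep 0.5; done
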
00:12:31.056 15:55:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:31.056 15:55:10 -- common/autotest_common.sh@10 -- # set +x 00:12:31.056 ************************************ 00:12:31.056 END TEST nvmf_delete_subsystem 00:12:31.056 ************************************ 00:12:31.316 15:55:10 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:31.316 15:55:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:31.316 15:55:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.316 15:55:10 -- common/autotest_common.sh@10 -- # set +x 00:12:31.316 ************************************ 00:12:31.316 START TEST nvmf_ns_masking 00:12:31.316 ************************************ 00:12:31.316 15:55:10 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:31.316 * Looking for test storage... 00:12:31.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.316 15:55:10 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.316 15:55:10 -- nvmf/common.sh@7 -- # uname -s 00:12:31.316 15:55:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.316 15:55:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.316 15:55:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.316 15:55:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.316 15:55:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.316 15:55:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.316 15:55:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.316 15:55:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.316 15:55:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.316 15:55:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.316 15:55:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:31.316 15:55:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:31.316 15:55:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.316 15:55:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.316 15:55:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.316 15:55:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.316 15:55:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.316 15:55:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.316 15:55:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.316 15:55:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.316 15:55:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.316 15:55:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.316 15:55:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.317 15:55:10 -- paths/export.sh@5 -- # export PATH 00:12:31.317 15:55:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.317 15:55:10 -- nvmf/common.sh@47 -- # : 0 00:12:31.317 15:55:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.317 15:55:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.317 15:55:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.317 15:55:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.317 15:55:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.317 15:55:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.317 15:55:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.317 15:55:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.575 15:55:11 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:31.575 15:55:11 -- target/ns_masking.sh@11 -- # loops=5 00:12:31.575 15:55:11 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:31.575 15:55:11 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:12:31.575 15:55:11 -- target/ns_masking.sh@15 -- # uuidgen 00:12:31.575 15:55:11 -- target/ns_masking.sh@15 -- # HOSTID=aca707f7-458f-4ee9-8ee7-a400ee10b1cf 00:12:31.575 15:55:11 -- target/ns_masking.sh@44 -- # nvmftestinit 00:12:31.575 15:55:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:31.575 15:55:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.575 15:55:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:31.575 15:55:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:31.575 15:55:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:31.575 15:55:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.575 15:55:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.575 15:55:11 -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:12:31.575 15:55:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:31.575 15:55:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:31.575 15:55:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:31.575 15:55:11 -- common/autotest_common.sh@10 -- # set +x 00:12:36.848 15:55:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:36.848 15:55:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:36.848 15:55:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:36.848 15:55:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:36.848 15:55:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:36.848 15:55:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:36.848 15:55:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:36.848 15:55:16 -- nvmf/common.sh@295 -- # net_devs=() 00:12:36.848 15:55:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:36.848 15:55:16 -- nvmf/common.sh@296 -- # e810=() 00:12:36.848 15:55:16 -- nvmf/common.sh@296 -- # local -ga e810 00:12:36.848 15:55:16 -- nvmf/common.sh@297 -- # x722=() 00:12:36.848 15:55:16 -- nvmf/common.sh@297 -- # local -ga x722 00:12:36.848 15:55:16 -- nvmf/common.sh@298 -- # mlx=() 00:12:36.848 15:55:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:36.848 15:55:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.848 15:55:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.848 15:55:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.848 15:55:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.848 15:55:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.848 15:55:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.848 15:55:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.848 15:55:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.848 15:55:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.848 15:55:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.848 15:55:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.848 15:55:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:36.848 15:55:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:36.848 15:55:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:36.848 15:55:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:36.848 15:55:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:36.848 15:55:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:36.848 15:55:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:36.848 15:55:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:36.848 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:36.848 15:55:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:36.848 15:55:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:36.848 15:55:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.848 15:55:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.848 15:55:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:36.848 15:55:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:36.848 15:55:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:36.848 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:36.848 15:55:16 -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:12:36.848 15:55:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:36.848 15:55:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.848 15:55:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.848 15:55:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:36.848 15:55:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:36.848 15:55:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:36.848 15:55:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:36.848 15:55:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:36.848 15:55:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.848 15:55:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:36.848 15:55:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.848 15:55:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:36.848 Found net devices under 0000:86:00.0: cvl_0_0 00:12:36.848 15:55:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.848 15:55:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:36.848 15:55:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.848 15:55:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:36.848 15:55:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.848 15:55:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:36.848 Found net devices under 0000:86:00.1: cvl_0_1 00:12:36.848 15:55:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.848 15:55:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:36.848 15:55:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:36.848 15:55:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:36.848 15:55:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:36.848 15:55:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:36.848 15:55:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.848 15:55:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.848 15:55:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.848 15:55:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:36.848 15:55:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.848 15:55:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.848 15:55:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:36.848 15:55:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.848 15:55:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.848 15:55:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:36.848 15:55:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:36.848 15:55:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.848 15:55:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.848 15:55:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.848 15:55:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:36.848 15:55:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:36.848 15:55:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.848 15:55:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.848 15:55:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.848 15:55:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:36.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:12:36.848 00:12:36.848 --- 10.0.0.2 ping statistics --- 00:12:36.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.848 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:12:36.848 15:55:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:36.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:12:36.848 00:12:36.848 --- 10.0.0.1 ping statistics --- 00:12:36.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.848 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:12:36.848 15:55:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.848 15:55:16 -- nvmf/common.sh@411 -- # return 0 00:12:36.848 15:55:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:36.848 15:55:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.848 15:55:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:36.848 15:55:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:36.848 15:55:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.848 15:55:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:36.848 15:55:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:36.848 15:55:16 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:12:36.848 15:55:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:36.848 15:55:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:36.848 15:55:16 -- common/autotest_common.sh@10 -- # set +x 00:12:36.848 15:55:16 -- nvmf/common.sh@470 -- # nvmfpid=2375694 00:12:36.848 15:55:16 -- nvmf/common.sh@471 -- # waitforlisten 2375694 00:12:36.848 15:55:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:36.848 15:55:16 -- common/autotest_common.sh@817 -- # '[' -z 2375694 ']' 00:12:36.848 15:55:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.848 15:55:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:36.848 15:55:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.849 15:55:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:36.849 15:55:16 -- common/autotest_common.sh@10 -- # set +x 00:12:37.109 [2024-04-26 15:55:16.559166] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:12:37.109 [2024-04-26 15:55:16.559254] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.109 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.109 [2024-04-26 15:55:16.669306] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:37.368 [2024-04-26 15:55:16.898895] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:37.368 [2024-04-26 15:55:16.898939] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.368 [2024-04-26 15:55:16.898950] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.368 [2024-04-26 15:55:16.898961] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.368 [2024-04-26 15:55:16.898969] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:37.368 [2024-04-26 15:55:16.899039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.368 [2024-04-26 15:55:16.899117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.368 [2024-04-26 15:55:16.899131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.368 [2024-04-26 15:55:16.899141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.938 15:55:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:37.938 15:55:17 -- common/autotest_common.sh@850 -- # return 0 00:12:37.938 15:55:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:37.938 15:55:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:37.938 15:55:17 -- common/autotest_common.sh@10 -- # set +x 00:12:37.938 15:55:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.938 15:55:17 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:37.938 [2024-04-26 15:55:17.526405] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.938 15:55:17 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:12:37.938 15:55:17 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:12:37.938 15:55:17 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:38.198 Malloc1 00:12:38.198 15:55:17 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:38.458 Malloc2 00:12:38.458 15:55:18 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:38.717 15:55:18 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:38.976 15:55:18 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.976 [2024-04-26 15:55:18.604871] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.976 15:55:18 -- target/ns_masking.sh@61 -- # connect 00:12:38.976 15:55:18 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I aca707f7-458f-4ee9-8ee7-a400ee10b1cf -a 10.0.0.2 -s 4420 -i 4 00:12:39.235 15:55:18 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:12:39.235 15:55:18 -- common/autotest_common.sh@1184 -- # local i=0 00:12:39.235 15:55:18 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.235 15:55:18 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 
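The namespace-visibility probes that follow are plain nvme-cli commands run against the freshly connected controller (nvme0 in this run). Reconstructed from the ns_masking.sh xtrace, the check amounts to the sketch below; the function name mirrors the script's ns_is_visible, but the body is an illustration of what the trace shows rather than a copy of the script:

    ns_is_visible() {
        local nsid=$1                     # e.g. 0x1 or 0x2
        # list the controller's active namespaces and show the one under test
        nvme list-ns /dev/nvme0 | grep "$nsid"
        # in this trace, a namespace the host may not see reports an all-zero NGUID
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ "$nguid" != "00000000000000000000000000000000" ]]
    }

The NOT wrapper used around ns_is_visible later in the trace simply asserts that this check fails, i.e. that the namespace is masked for nqn.2016-06.io.spdk:host1.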
00:12:39.235 15:55:18 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:41.139 15:55:20 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:41.139 15:55:20 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:41.139 15:55:20 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.139 15:55:20 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:41.139 15:55:20 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.139 15:55:20 -- common/autotest_common.sh@1194 -- # return 0 00:12:41.139 15:55:20 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:41.139 15:55:20 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:41.398 15:55:20 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:41.398 15:55:20 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:41.398 15:55:20 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:12:41.398 15:55:20 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:41.398 15:55:20 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:41.398 [ 0]:0x1 00:12:41.398 15:55:20 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:41.398 15:55:20 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:41.398 15:55:20 -- target/ns_masking.sh@40 -- # nguid=fc92d0132280466e8b29f2e97890cf8b 00:12:41.398 15:55:20 -- target/ns_masking.sh@41 -- # [[ fc92d0132280466e8b29f2e97890cf8b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:41.398 15:55:20 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:41.658 15:55:21 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:12:41.658 15:55:21 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:41.658 15:55:21 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:41.658 [ 0]:0x1 00:12:41.658 15:55:21 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:41.658 15:55:21 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:41.658 15:55:21 -- target/ns_masking.sh@40 -- # nguid=fc92d0132280466e8b29f2e97890cf8b 00:12:41.658 15:55:21 -- target/ns_masking.sh@41 -- # [[ fc92d0132280466e8b29f2e97890cf8b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:41.658 15:55:21 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:12:41.658 15:55:21 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:41.658 15:55:21 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:41.658 [ 1]:0x2 00:12:41.658 15:55:21 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:41.658 15:55:21 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:41.658 15:55:21 -- target/ns_masking.sh@40 -- # nguid=b660e82284844a5bb77f92f31b006779 00:12:41.658 15:55:21 -- target/ns_masking.sh@41 -- # [[ b660e82284844a5bb77f92f31b006779 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:41.658 15:55:21 -- target/ns_masking.sh@69 -- # disconnect 00:12:41.658 15:55:21 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.658 15:55:21 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.917 15:55:21 -- target/ns_masking.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:42.174 15:55:21 -- target/ns_masking.sh@77 -- # connect 1 00:12:42.174 15:55:21 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I aca707f7-458f-4ee9-8ee7-a400ee10b1cf -a 10.0.0.2 -s 4420 -i 4 00:12:42.432 15:55:21 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:42.432 15:55:21 -- common/autotest_common.sh@1184 -- # local i=0 00:12:42.432 15:55:21 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.432 15:55:21 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:12:42.432 15:55:21 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:12:42.432 15:55:21 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:44.390 15:55:23 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:44.390 15:55:23 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:44.390 15:55:23 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.390 15:55:23 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:44.390 15:55:23 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.390 15:55:23 -- common/autotest_common.sh@1194 -- # return 0 00:12:44.390 15:55:23 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:44.390 15:55:23 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:44.390 15:55:23 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:44.390 15:55:23 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:44.390 15:55:23 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:12:44.390 15:55:23 -- common/autotest_common.sh@638 -- # local es=0 00:12:44.390 15:55:23 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:12:44.390 15:55:23 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:12:44.390 15:55:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:44.390 15:55:23 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:12:44.390 15:55:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:44.390 15:55:23 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:12:44.390 15:55:23 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:44.390 15:55:23 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:44.390 15:55:23 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:44.390 15:55:23 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:44.684 15:55:24 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:44.684 15:55:24 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:44.684 15:55:24 -- common/autotest_common.sh@641 -- # es=1 00:12:44.684 15:55:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:44.684 15:55:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:44.684 15:55:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:44.684 15:55:24 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:12:44.684 15:55:24 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:44.684 15:55:24 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:44.684 [ 0]:0x2 00:12:44.684 15:55:24 -- target/ns_masking.sh@40 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:12:44.684 15:55:24 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:44.684 15:55:24 -- target/ns_masking.sh@40 -- # nguid=b660e82284844a5bb77f92f31b006779 00:12:44.684 15:55:24 -- target/ns_masking.sh@41 -- # [[ b660e82284844a5bb77f92f31b006779 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:44.684 15:55:24 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:44.684 15:55:24 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:12:44.684 15:55:24 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:44.684 15:55:24 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:44.684 [ 0]:0x1 00:12:44.684 15:55:24 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:44.684 15:55:24 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:44.943 15:55:24 -- target/ns_masking.sh@40 -- # nguid=fc92d0132280466e8b29f2e97890cf8b 00:12:44.943 15:55:24 -- target/ns_masking.sh@41 -- # [[ fc92d0132280466e8b29f2e97890cf8b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:44.943 15:55:24 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:12:44.943 15:55:24 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:44.943 15:55:24 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:44.943 [ 1]:0x2 00:12:44.943 15:55:24 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:44.943 15:55:24 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:44.943 15:55:24 -- target/ns_masking.sh@40 -- # nguid=b660e82284844a5bb77f92f31b006779 00:12:44.943 15:55:24 -- target/ns_masking.sh@41 -- # [[ b660e82284844a5bb77f92f31b006779 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:44.943 15:55:24 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:44.943 15:55:24 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:12:44.943 15:55:24 -- common/autotest_common.sh@638 -- # local es=0 00:12:44.943 15:55:24 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:12:44.943 15:55:24 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:12:44.943 15:55:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:44.943 15:55:24 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:12:44.943 15:55:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:44.943 15:55:24 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:12:44.943 15:55:24 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:44.943 15:55:24 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:44.943 15:55:24 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:44.943 15:55:24 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:45.202 15:55:24 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:45.202 15:55:24 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:45.202 15:55:24 -- common/autotest_common.sh@641 -- # es=1 00:12:45.202 15:55:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:45.202 15:55:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:45.202 15:55:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:45.202 15:55:24 -- 
target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:12:45.202 15:55:24 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:45.202 15:55:24 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:45.202 [ 0]:0x2 00:12:45.202 15:55:24 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:45.202 15:55:24 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:45.202 15:55:24 -- target/ns_masking.sh@40 -- # nguid=b660e82284844a5bb77f92f31b006779 00:12:45.202 15:55:24 -- target/ns_masking.sh@41 -- # [[ b660e82284844a5bb77f92f31b006779 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:45.202 15:55:24 -- target/ns_masking.sh@91 -- # disconnect 00:12:45.202 15:55:24 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:45.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.202 15:55:24 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:45.461 15:55:24 -- target/ns_masking.sh@95 -- # connect 2 00:12:45.461 15:55:24 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I aca707f7-458f-4ee9-8ee7-a400ee10b1cf -a 10.0.0.2 -s 4420 -i 4 00:12:45.461 15:55:25 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:45.461 15:55:25 -- common/autotest_common.sh@1184 -- # local i=0 00:12:45.461 15:55:25 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.461 15:55:25 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:12:45.461 15:55:25 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:12:45.461 15:55:25 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:47.996 15:55:27 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:47.996 15:55:27 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:47.996 15:55:27 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.996 15:55:27 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:12:47.996 15:55:27 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.996 15:55:27 -- common/autotest_common.sh@1194 -- # return 0 00:12:47.996 15:55:27 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:47.996 15:55:27 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:47.996 15:55:27 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:47.996 15:55:27 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:47.996 15:55:27 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:12:47.996 15:55:27 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:47.996 15:55:27 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:47.996 [ 0]:0x1 00:12:47.996 15:55:27 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:47.996 15:55:27 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:47.996 15:55:27 -- target/ns_masking.sh@40 -- # nguid=fc92d0132280466e8b29f2e97890cf8b 00:12:47.996 15:55:27 -- target/ns_masking.sh@41 -- # [[ fc92d0132280466e8b29f2e97890cf8b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.996 15:55:27 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:12:47.996 15:55:27 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:47.996 15:55:27 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:47.996 [ 1]:0x2 
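All of the mask changes being verified here are driven over JSON-RPC on the target side; condensed from the commands already shown in this trace (rpc.py again stands for the full scripts/rpc.py path), the flow is roughly:

    # attach the namespace without making it visible to every host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

    # grant namespace 1 to one host NQN ...
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

    # ... and revoke it again; the checks above re-run ns_is_visible after each RPC,
    # with the host connected as -q nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
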
00:12:47.996 15:55:27 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:47.996 15:55:27 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:47.996 15:55:27 -- target/ns_masking.sh@40 -- # nguid=b660e82284844a5bb77f92f31b006779 00:12:47.996 15:55:27 -- target/ns_masking.sh@41 -- # [[ b660e82284844a5bb77f92f31b006779 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.996 15:55:27 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:47.996 15:55:27 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:12:47.996 15:55:27 -- common/autotest_common.sh@638 -- # local es=0 00:12:47.996 15:55:27 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:12:47.996 15:55:27 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:12:47.996 15:55:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:47.996 15:55:27 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:12:47.996 15:55:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:47.996 15:55:27 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:12:47.996 15:55:27 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:47.996 15:55:27 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:47.996 15:55:27 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:47.996 15:55:27 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:48.255 15:55:27 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:48.255 15:55:27 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.255 15:55:27 -- common/autotest_common.sh@641 -- # es=1 00:12:48.255 15:55:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:48.255 15:55:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:48.256 15:55:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:48.256 15:55:27 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:12:48.256 15:55:27 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:48.256 15:55:27 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:48.256 [ 0]:0x2 00:12:48.256 15:55:27 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:48.256 15:55:27 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:48.256 15:55:27 -- target/ns_masking.sh@40 -- # nguid=b660e82284844a5bb77f92f31b006779 00:12:48.256 15:55:27 -- target/ns_masking.sh@41 -- # [[ b660e82284844a5bb77f92f31b006779 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.256 15:55:27 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:48.256 15:55:27 -- common/autotest_common.sh@638 -- # local es=0 00:12:48.256 15:55:27 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:48.256 15:55:27 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:48.256 15:55:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:48.256 15:55:27 -- common/autotest_common.sh@630 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:48.256 15:55:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:48.256 15:55:27 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:48.256 15:55:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:48.256 15:55:27 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:48.256 15:55:27 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:48.256 15:55:27 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:48.256 [2024-04-26 15:55:27.930007] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:48.256 request: 00:12:48.256 { 00:12:48.256 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:48.256 "nsid": 2, 00:12:48.256 "host": "nqn.2016-06.io.spdk:host1", 00:12:48.256 "method": "nvmf_ns_remove_host", 00:12:48.256 "req_id": 1 00:12:48.256 } 00:12:48.256 Got JSON-RPC error response 00:12:48.256 response: 00:12:48.256 { 00:12:48.256 "code": -32602, 00:12:48.256 "message": "Invalid parameters" 00:12:48.256 } 00:12:48.515 15:55:27 -- common/autotest_common.sh@641 -- # es=1 00:12:48.515 15:55:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:48.515 15:55:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:48.515 15:55:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:48.515 15:55:27 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:12:48.515 15:55:27 -- common/autotest_common.sh@638 -- # local es=0 00:12:48.515 15:55:27 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:12:48.515 15:55:27 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:12:48.515 15:55:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:48.515 15:55:27 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:12:48.515 15:55:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:48.515 15:55:27 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:12:48.515 15:55:27 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:48.515 15:55:27 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:48.515 15:55:27 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:48.515 15:55:27 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:48.515 15:55:28 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:48.515 15:55:28 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.515 15:55:28 -- common/autotest_common.sh@641 -- # es=1 00:12:48.515 15:55:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:48.515 15:55:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:48.515 15:55:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:48.515 15:55:28 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:12:48.515 15:55:28 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:48.515 15:55:28 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:48.515 [ 0]:0x2 00:12:48.515 15:55:28 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:48.515 15:55:28 -- 
target/ns_masking.sh@40 -- # jq -r .nguid 00:12:48.515 15:55:28 -- target/ns_masking.sh@40 -- # nguid=b660e82284844a5bb77f92f31b006779 00:12:48.515 15:55:28 -- target/ns_masking.sh@41 -- # [[ b660e82284844a5bb77f92f31b006779 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.515 15:55:28 -- target/ns_masking.sh@108 -- # disconnect 00:12:48.515 15:55:28 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.516 15:55:28 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.775 15:55:28 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:12:48.775 15:55:28 -- target/ns_masking.sh@114 -- # nvmftestfini 00:12:48.775 15:55:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:48.775 15:55:28 -- nvmf/common.sh@117 -- # sync 00:12:48.775 15:55:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:48.775 15:55:28 -- nvmf/common.sh@120 -- # set +e 00:12:48.775 15:55:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:48.775 15:55:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:48.775 rmmod nvme_tcp 00:12:48.775 rmmod nvme_fabrics 00:12:48.775 rmmod nvme_keyring 00:12:48.775 15:55:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:48.775 15:55:28 -- nvmf/common.sh@124 -- # set -e 00:12:48.775 15:55:28 -- nvmf/common.sh@125 -- # return 0 00:12:48.775 15:55:28 -- nvmf/common.sh@478 -- # '[' -n 2375694 ']' 00:12:48.775 15:55:28 -- nvmf/common.sh@479 -- # killprocess 2375694 00:12:48.775 15:55:28 -- common/autotest_common.sh@936 -- # '[' -z 2375694 ']' 00:12:48.775 15:55:28 -- common/autotest_common.sh@940 -- # kill -0 2375694 00:12:48.775 15:55:28 -- common/autotest_common.sh@941 -- # uname 00:12:48.775 15:55:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:48.775 15:55:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2375694 00:12:48.775 15:55:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:48.775 15:55:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:48.775 15:55:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2375694' 00:12:48.775 killing process with pid 2375694 00:12:48.775 15:55:28 -- common/autotest_common.sh@955 -- # kill 2375694 00:12:48.775 15:55:28 -- common/autotest_common.sh@960 -- # wait 2375694 00:12:50.682 15:55:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:50.682 15:55:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:50.682 15:55:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:50.682 15:55:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:50.682 15:55:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:50.682 15:55:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.682 15:55:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:50.682 15:55:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.589 15:55:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:52.589 00:12:52.589 real 0m21.202s 00:12:52.589 user 0m54.993s 00:12:52.589 sys 0m5.865s 00:12:52.589 15:55:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:52.589 15:55:32 -- common/autotest_common.sh@10 -- # set +x 00:12:52.589 ************************************ 00:12:52.589 END TEST nvmf_ns_masking 00:12:52.589 
************************************ 00:12:52.589 15:55:32 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:52.589 15:55:32 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:52.589 15:55:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:52.589 15:55:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:52.589 15:55:32 -- common/autotest_common.sh@10 -- # set +x 00:12:52.589 ************************************ 00:12:52.589 START TEST nvmf_nvme_cli 00:12:52.589 ************************************ 00:12:52.589 15:55:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:52.849 * Looking for test storage... 00:12:52.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.849 15:55:32 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.849 15:55:32 -- nvmf/common.sh@7 -- # uname -s 00:12:52.849 15:55:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.849 15:55:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.849 15:55:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.849 15:55:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.849 15:55:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.849 15:55:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.849 15:55:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.849 15:55:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.849 15:55:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.849 15:55:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.849 15:55:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:52.849 15:55:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:52.849 15:55:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.849 15:55:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.849 15:55:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.849 15:55:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.849 15:55:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.849 15:55:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.849 15:55:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.849 15:55:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.849 15:55:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.849 15:55:32 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.849 15:55:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.849 15:55:32 -- paths/export.sh@5 -- # export PATH 00:12:52.849 15:55:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.849 15:55:32 -- nvmf/common.sh@47 -- # : 0 00:12:52.849 15:55:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:52.849 15:55:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:52.849 15:55:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.850 15:55:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.850 15:55:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.850 15:55:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:52.850 15:55:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:52.850 15:55:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:52.850 15:55:32 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:52.850 15:55:32 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:52.850 15:55:32 -- target/nvme_cli.sh@14 -- # devs=() 00:12:52.850 15:55:32 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:52.850 15:55:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:52.850 15:55:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.850 15:55:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:52.850 15:55:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:52.850 15:55:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:52.850 15:55:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.850 15:55:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.850 15:55:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.850 15:55:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:52.850 15:55:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:52.850 15:55:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:52.850 15:55:32 -- common/autotest_common.sh@10 -- # set +x 00:12:58.181 15:55:37 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:58.181 15:55:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:58.181 15:55:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:58.181 15:55:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:58.181 15:55:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:58.181 15:55:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:58.181 15:55:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:58.181 15:55:37 -- nvmf/common.sh@295 -- # net_devs=() 00:12:58.181 15:55:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:58.181 15:55:37 -- nvmf/common.sh@296 -- # e810=() 00:12:58.181 15:55:37 -- nvmf/common.sh@296 -- # local -ga e810 00:12:58.181 15:55:37 -- nvmf/common.sh@297 -- # x722=() 00:12:58.181 15:55:37 -- nvmf/common.sh@297 -- # local -ga x722 00:12:58.181 15:55:37 -- nvmf/common.sh@298 -- # mlx=() 00:12:58.181 15:55:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:58.181 15:55:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:58.181 15:55:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:58.181 15:55:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:58.181 15:55:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:58.181 15:55:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:58.181 15:55:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:58.181 15:55:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:58.181 15:55:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:58.181 15:55:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:58.181 15:55:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:58.181 15:55:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:58.181 15:55:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:58.181 15:55:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:58.181 15:55:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:58.181 15:55:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:58.181 15:55:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:58.181 15:55:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:58.181 15:55:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:58.181 15:55:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:58.181 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:58.181 15:55:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:58.181 15:55:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:58.181 15:55:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:58.181 15:55:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:58.181 15:55:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:58.181 15:55:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:58.181 15:55:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:58.182 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:58.182 15:55:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:58.182 15:55:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:58.182 15:55:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:58.182 15:55:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:58.182 15:55:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
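The scan above classifies ports purely by PCI vendor/device ID: 0x8086:0x159b is the Intel E810 family collected into the e810 array, and the matching net devices are then read from sysfs. As a hypothetical manual spot-check (not part of the test scripts), the same information can be pulled by hand:

# Hypothetical spot-check, mirroring the "Found net devices under ..." lines above.
lspci -d 8086:159b                                   # E810 ports present on this node
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    ls "/sys/bus/pci/devices/$pci/net/"              # e.g. cvl_0_0, cvl_0_1 once bound to the ice driver
done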
00:12:58.182 15:55:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:58.182 15:55:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:58.182 15:55:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:58.182 15:55:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:58.182 15:55:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.182 15:55:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:58.182 15:55:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.182 15:55:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:58.182 Found net devices under 0000:86:00.0: cvl_0_0 00:12:58.182 15:55:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.182 15:55:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:58.182 15:55:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.182 15:55:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:58.182 15:55:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.182 15:55:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:58.182 Found net devices under 0000:86:00.1: cvl_0_1 00:12:58.182 15:55:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.182 15:55:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:58.182 15:55:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:58.182 15:55:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:58.182 15:55:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:58.182 15:55:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:58.182 15:55:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:58.182 15:55:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:58.182 15:55:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:58.182 15:55:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:58.182 15:55:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:58.182 15:55:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:58.182 15:55:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:58.182 15:55:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:58.182 15:55:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:58.182 15:55:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:58.182 15:55:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:58.182 15:55:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:58.182 15:55:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:58.182 15:55:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:58.182 15:55:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:58.182 15:55:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:58.182 15:55:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:58.182 15:55:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:58.182 15:55:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:58.182 15:55:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:58.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:58.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:12:58.182 00:12:58.182 --- 10.0.0.2 ping statistics --- 00:12:58.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.182 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:12:58.182 15:55:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:58.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:58.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:12:58.182 00:12:58.182 --- 10.0.0.1 ping statistics --- 00:12:58.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.182 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:12:58.182 15:55:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:58.182 15:55:37 -- nvmf/common.sh@411 -- # return 0 00:12:58.182 15:55:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:58.182 15:55:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:58.182 15:55:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:58.182 15:55:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:58.182 15:55:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:58.182 15:55:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:58.182 15:55:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:58.182 15:55:37 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:58.182 15:55:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:58.182 15:55:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:58.182 15:55:37 -- common/autotest_common.sh@10 -- # set +x 00:12:58.182 15:55:37 -- nvmf/common.sh@470 -- # nvmfpid=2381643 00:12:58.182 15:55:37 -- nvmf/common.sh@471 -- # waitforlisten 2381643 00:12:58.182 15:55:37 -- common/autotest_common.sh@817 -- # '[' -z 2381643 ']' 00:12:58.182 15:55:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.182 15:55:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:58.182 15:55:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.182 15:55:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:58.182 15:55:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:58.182 15:55:37 -- common/autotest_common.sh@10 -- # set +x 00:12:58.442 [2024-04-26 15:55:37.875718] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:12:58.442 [2024-04-26 15:55:37.875802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.442 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.442 [2024-04-26 15:55:37.985532] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:58.702 [2024-04-26 15:55:38.204087] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.702 [2024-04-26 15:55:38.204129] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:58.702 [2024-04-26 15:55:38.204139] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.702 [2024-04-26 15:55:38.204149] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.702 [2024-04-26 15:55:38.204156] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:58.702 [2024-04-26 15:55:38.204234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.702 [2024-04-26 15:55:38.204330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.702 [2024-04-26 15:55:38.204364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.702 [2024-04-26 15:55:38.204374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.271 15:55:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:59.271 15:55:38 -- common/autotest_common.sh@850 -- # return 0 00:12:59.271 15:55:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:59.271 15:55:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:59.271 15:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:59.271 15:55:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.271 15:55:38 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:59.271 15:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.271 15:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:59.271 [2024-04-26 15:55:38.690446] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.271 15:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.271 15:55:38 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:59.271 15:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.271 15:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:59.271 Malloc0 00:12:59.271 15:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.271 15:55:38 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:59.271 15:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.271 15:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:59.271 Malloc1 00:12:59.271 15:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.271 15:55:38 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:59.271 15:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.271 15:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:59.271 15:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.271 15:55:38 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:59.271 15:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.271 15:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:59.271 15:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.271 15:55:38 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:59.271 15:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.271 15:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:59.271 15:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.271 15:55:38 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:12:59.271 15:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.271 15:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:59.271 [2024-04-26 15:55:38.917030] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.271 15:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.271 15:55:38 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:59.271 15:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.271 15:55:38 -- common/autotest_common.sh@10 -- # set +x 00:12:59.271 15:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.271 15:55:38 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:59.531 00:12:59.531 Discovery Log Number of Records 2, Generation counter 2 00:12:59.531 =====Discovery Log Entry 0====== 00:12:59.531 trtype: tcp 00:12:59.531 adrfam: ipv4 00:12:59.531 subtype: current discovery subsystem 00:12:59.531 treq: not required 00:12:59.531 portid: 0 00:12:59.531 trsvcid: 4420 00:12:59.531 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:59.531 traddr: 10.0.0.2 00:12:59.531 eflags: explicit discovery connections, duplicate discovery information 00:12:59.531 sectype: none 00:12:59.531 =====Discovery Log Entry 1====== 00:12:59.531 trtype: tcp 00:12:59.531 adrfam: ipv4 00:12:59.531 subtype: nvme subsystem 00:12:59.531 treq: not required 00:12:59.531 portid: 0 00:12:59.531 trsvcid: 4420 00:12:59.531 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:59.531 traddr: 10.0.0.2 00:12:59.531 eflags: none 00:12:59.531 sectype: none 00:12:59.531 15:55:39 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:59.531 15:55:39 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:59.531 15:55:39 -- nvmf/common.sh@511 -- # local dev _ 00:12:59.531 15:55:39 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:59.531 15:55:39 -- nvmf/common.sh@510 -- # nvme list 00:12:59.531 15:55:39 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:12:59.531 15:55:39 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:59.531 15:55:39 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:12:59.531 15:55:39 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:59.531 15:55:39 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:59.531 15:55:39 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:00.910 15:55:40 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:00.910 15:55:40 -- common/autotest_common.sh@1184 -- # local i=0 00:13:00.910 15:55:40 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.910 15:55:40 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:13:00.910 15:55:40 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:13:00.910 15:55:40 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:02.814 15:55:42 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:02.814 15:55:42 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:02.814 15:55:42 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:02.814 15:55:42 -- common/autotest_common.sh@1193 -- # nvme_devices=2 
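Stripped of the xtrace noise, the initiator-side sequence the test just completed is a plain discover-then-connect against the TCP listener, followed by waiting for both Malloc-backed namespaces (serial SPDKISFASTANDAWESOME) to show up. A condensed sketch, with the host NQN/ID values taken from the nvme gen-hostnqn output earlier in the log:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420
nvme connect  --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
# Both namespaces are up once two block devices report the subsystem serial:
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME          # expect 2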
00:13:02.814 15:55:42 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:02.814 15:55:42 -- common/autotest_common.sh@1194 -- # return 0 00:13:02.814 15:55:42 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:02.814 15:55:42 -- nvmf/common.sh@511 -- # local dev _ 00:13:02.814 15:55:42 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:02.814 15:55:42 -- nvmf/common.sh@510 -- # nvme list 00:13:02.814 15:55:42 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:13:02.814 15:55:42 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:02.814 15:55:42 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:13:02.814 15:55:42 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:02.814 15:55:42 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:02.814 15:55:42 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:13:02.814 15:55:42 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:02.814 15:55:42 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:02.814 15:55:42 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:13:02.814 15:55:42 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:02.814 15:55:42 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:13:02.814 /dev/nvme0n1 ]] 00:13:02.814 15:55:42 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:02.814 15:55:42 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:02.814 15:55:42 -- nvmf/common.sh@511 -- # local dev _ 00:13:02.814 15:55:42 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:02.814 15:55:42 -- nvmf/common.sh@510 -- # nvme list 00:13:02.814 15:55:42 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:13:02.814 15:55:42 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:02.814 15:55:42 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:13:02.814 15:55:42 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:02.814 15:55:42 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:02.814 15:55:42 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:13:02.814 15:55:42 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:02.814 15:55:42 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:02.814 15:55:42 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:13:02.814 15:55:42 -- nvmf/common.sh@513 -- # read -r dev _ 00:13:02.814 15:55:42 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:02.814 15:55:42 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.073 15:55:42 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.073 15:55:42 -- common/autotest_common.sh@1205 -- # local i=0 00:13:03.073 15:55:42 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:03.073 15:55:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.073 15:55:42 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:03.073 15:55:42 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.073 15:55:42 -- common/autotest_common.sh@1217 -- # return 0 00:13:03.073 15:55:42 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:03.073 15:55:42 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.073 15:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:03.073 15:55:42 -- common/autotest_common.sh@10 -- # set +x 00:13:03.073 15:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:03.073 15:55:42 -- 
target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:03.073 15:55:42 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:03.073 15:55:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:03.073 15:55:42 -- nvmf/common.sh@117 -- # sync 00:13:03.073 15:55:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:03.073 15:55:42 -- nvmf/common.sh@120 -- # set +e 00:13:03.073 15:55:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:03.073 15:55:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:03.073 rmmod nvme_tcp 00:13:03.073 rmmod nvme_fabrics 00:13:03.332 rmmod nvme_keyring 00:13:03.332 15:55:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:03.332 15:55:42 -- nvmf/common.sh@124 -- # set -e 00:13:03.333 15:55:42 -- nvmf/common.sh@125 -- # return 0 00:13:03.333 15:55:42 -- nvmf/common.sh@478 -- # '[' -n 2381643 ']' 00:13:03.333 15:55:42 -- nvmf/common.sh@479 -- # killprocess 2381643 00:13:03.333 15:55:42 -- common/autotest_common.sh@936 -- # '[' -z 2381643 ']' 00:13:03.333 15:55:42 -- common/autotest_common.sh@940 -- # kill -0 2381643 00:13:03.333 15:55:42 -- common/autotest_common.sh@941 -- # uname 00:13:03.333 15:55:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:03.333 15:55:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2381643 00:13:03.333 15:55:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:03.333 15:55:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:03.333 15:55:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2381643' 00:13:03.333 killing process with pid 2381643 00:13:03.333 15:55:42 -- common/autotest_common.sh@955 -- # kill 2381643 00:13:03.333 15:55:42 -- common/autotest_common.sh@960 -- # wait 2381643 00:13:05.240 15:55:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:05.240 15:55:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:05.240 15:55:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:05.240 15:55:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:05.240 15:55:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:05.240 15:55:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.240 15:55:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.240 15:55:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.143 15:55:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:07.143 00:13:07.143 real 0m14.323s 00:13:07.143 user 0m25.094s 00:13:07.143 sys 0m4.856s 00:13:07.143 15:55:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:07.143 15:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:07.143 ************************************ 00:13:07.143 END TEST nvmf_nvme_cli 00:13:07.143 ************************************ 00:13:07.143 15:55:46 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:13:07.143 15:55:46 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:07.143 15:55:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:07.143 15:55:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:07.143 15:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:07.143 ************************************ 00:13:07.143 START TEST nvmf_vfio_user 00:13:07.143 ************************************ 00:13:07.143 15:55:46 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:07.143 * Looking for test storage... 00:13:07.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.402 15:55:46 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.402 15:55:46 -- nvmf/common.sh@7 -- # uname -s 00:13:07.402 15:55:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.402 15:55:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.402 15:55:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.402 15:55:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.402 15:55:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.402 15:55:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.402 15:55:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.402 15:55:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.402 15:55:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.402 15:55:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.402 15:55:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:07.402 15:55:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:07.402 15:55:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.402 15:55:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.402 15:55:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.402 15:55:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.402 15:55:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.402 15:55:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.402 15:55:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.402 15:55:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.402 15:55:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.402 15:55:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.402 15:55:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.402 15:55:46 -- paths/export.sh@5 -- # export PATH 00:13:07.402 15:55:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.402 15:55:46 -- nvmf/common.sh@47 -- # : 0 00:13:07.402 15:55:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:07.402 15:55:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:07.402 15:55:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.402 15:55:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.402 15:55:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.402 15:55:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:07.402 15:55:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:07.402 15:55:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:07.402 15:55:46 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:07.402 15:55:46 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:07.402 15:55:46 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:07.402 15:55:46 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.402 15:55:46 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:07.402 15:55:46 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:07.402 15:55:46 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:07.402 15:55:46 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:07.402 15:55:46 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:07.402 15:55:46 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:07.402 15:55:46 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2383190 00:13:07.402 15:55:46 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2383190' 00:13:07.402 Process pid: 2383190 00:13:07.402 15:55:46 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:07.402 15:55:46 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:07.402 15:55:46 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2383190 00:13:07.402 15:55:46 -- common/autotest_common.sh@817 -- # '[' -z 2383190 ']' 00:13:07.402 15:55:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.402 15:55:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:07.402 15:55:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.402 15:55:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:07.402 15:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:07.402 [2024-04-26 15:55:46.943768] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:13:07.402 [2024-04-26 15:55:46.943856] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.402 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.402 [2024-04-26 15:55:47.047666] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:07.671 [2024-04-26 15:55:47.283805] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.671 [2024-04-26 15:55:47.283846] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.671 [2024-04-26 15:55:47.283859] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.671 [2024-04-26 15:55:47.283870] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.671 [2024-04-26 15:55:47.283878] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:07.671 [2024-04-26 15:55:47.283946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.671 [2024-04-26 15:55:47.283964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.671 [2024-04-26 15:55:47.283986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.671 [2024-04-26 15:55:47.283991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:08.244 15:55:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:08.244 15:55:47 -- common/autotest_common.sh@850 -- # return 0 00:13:08.244 15:55:47 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:09.179 15:55:48 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:09.439 15:55:48 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:09.439 15:55:48 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:09.439 15:55:48 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:09.439 15:55:48 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:09.439 15:55:48 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:09.697 Malloc1 00:13:09.697 15:55:49 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:09.956 15:55:49 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:09.956 15:55:49 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:10.215 15:55:49 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:10.215 15:55:49 -- 
target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:10.215 15:55:49 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:10.474 Malloc2 00:13:10.474 15:55:50 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:10.733 15:55:50 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:10.733 15:55:50 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:10.992 15:55:50 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:10.992 15:55:50 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:10.992 15:55:50 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:10.992 15:55:50 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:10.992 15:55:50 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:10.992 15:55:50 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:10.992 [2024-04-26 15:55:50.632925] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:13:10.992 [2024-04-26 15:55:50.632986] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383898 ] 00:13:10.992 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.254 [2024-04-26 15:55:50.678418] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:11.254 [2024-04-26 15:55:50.686589] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:11.254 [2024-04-26 15:55:50.686621] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe788ddf000 00:13:11.254 [2024-04-26 15:55:50.687561] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:11.254 [2024-04-26 15:55:50.688572] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:11.254 [2024-04-26 15:55:50.689574] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:11.254 [2024-04-26 15:55:50.690578] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:11.254 [2024-04-26 15:55:50.691580] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:11.254 [2024-04-26 15:55:50.692589] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
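For orientation while the vfio-user controller probe above proceeds: the socket directory being attached to was wired up by the RPCs in the preceding lines, and the initiator side is simply spdk_nvme_identify pointed at that directory. Condensed, with paths and names exactly as used above (rpc.py abbreviated from its full workspace path):

# Target side, per emulated device (the VFIOUSER transport listens on a directory, not an IP):
rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
# Initiator side, as launched above:
spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci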
00:13:11.254 [2024-04-26 15:55:50.693593] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:11.254 [2024-04-26 15:55:50.694607] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:11.254 [2024-04-26 15:55:50.695610] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:11.254 [2024-04-26 15:55:50.695629] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe788dd4000 00:13:11.254 [2024-04-26 15:55:50.696863] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:11.254 [2024-04-26 15:55:50.712439] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:11.254 [2024-04-26 15:55:50.712473] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:13:11.254 [2024-04-26 15:55:50.717749] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:11.254 [2024-04-26 15:55:50.717883] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:11.254 [2024-04-26 15:55:50.718749] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:13:11.254 [2024-04-26 15:55:50.718772] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:13:11.254 [2024-04-26 15:55:50.718782] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:13:11.254 [2024-04-26 15:55:50.719747] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:11.254 [2024-04-26 15:55:50.719764] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:13:11.254 [2024-04-26 15:55:50.719777] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:13:11.254 [2024-04-26 15:55:50.720750] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:11.254 [2024-04-26 15:55:50.720768] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:13:11.254 [2024-04-26 15:55:50.720783] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:13:11.254 [2024-04-26 15:55:50.721746] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:11.254 [2024-04-26 15:55:50.721763] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:11.254 [2024-04-26 15:55:50.722753] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:11.254 [2024-04-26 15:55:50.722768] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:13:11.254 [2024-04-26 15:55:50.722776] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:13:11.254 [2024-04-26 15:55:50.722788] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:11.254 [2024-04-26 15:55:50.722897] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:13:11.254 [2024-04-26 15:55:50.722905] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:11.254 [2024-04-26 15:55:50.722914] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:11.254 [2024-04-26 15:55:50.723764] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:11.254 [2024-04-26 15:55:50.724770] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:11.254 [2024-04-26 15:55:50.725783] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:11.254 [2024-04-26 15:55:50.726776] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:11.254 [2024-04-26 15:55:50.726860] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:11.254 [2024-04-26 15:55:50.727803] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:11.254 [2024-04-26 15:55:50.727817] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:11.254 [2024-04-26 15:55:50.727827] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:13:11.254 [2024-04-26 15:55:50.727851] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:13:11.254 [2024-04-26 15:55:50.727867] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:13:11.254 [2024-04-26 15:55:50.727892] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:11.254 [2024-04-26 15:55:50.727902] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:11.254 [2024-04-26 15:55:50.727920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:11.254 [2024-04-26 
15:55:50.727980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:11.254 [2024-04-26 15:55:50.727996] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:13:11.254 [2024-04-26 15:55:50.728007] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:13:11.254 [2024-04-26 15:55:50.728014] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:13:11.254 [2024-04-26 15:55:50.728022] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:11.254 [2024-04-26 15:55:50.728030] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:13:11.255 [2024-04-26 15:55:50.728041] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:13:11.255 [2024-04-26 15:55:50.728050] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:13:11.255 [2024-04-26 15:55:50.728067] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:13:11.255 [2024-04-26 15:55:50.728093] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:11.255 [2024-04-26 15:55:50.728114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:11.255 [2024-04-26 15:55:50.728130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:11.255 [2024-04-26 15:55:50.728146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:11.255 [2024-04-26 15:55:50.728157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:11.255 [2024-04-26 15:55:50.728169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:11.255 [2024-04-26 15:55:50.728176] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:11.255 [2024-04-26 15:55:50.728188] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:11.255 [2024-04-26 15:55:50.728200] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:11.255 [2024-04-26 15:55:50.728213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:11.255 [2024-04-26 15:55:50.728220] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:13:11.255 [2024-04-26 15:55:50.728229] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:11.255 [2024-04-26 15:55:50.728240] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:13:11.255 [2024-04-26 15:55:50.728258] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:11.255 [2024-04-26 15:55:50.728269] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:11.255 [2024-04-26 15:55:50.728284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:11.255 [2024-04-26 15:55:50.728344] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:13:11.255 [2024-04-26 15:55:50.728362] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:11.255 [2024-04-26 15:55:50.728377] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:11.255 [2024-04-26 15:55:50.728387] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:11.255 [2024-04-26 15:55:50.728397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:11.255 [2024-04-26 15:55:50.728420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:11.255 [2024-04-26 15:55:50.728444] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:13:11.255 [2024-04-26 15:55:50.728464] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:13:11.255 [2024-04-26 15:55:50.728479] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:13:11.255 [2024-04-26 15:55:50.728493] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:11.255 [2024-04-26 15:55:50.728500] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:11.255 [2024-04-26 15:55:50.728511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:11.255 [2024-04-26 15:55:50.728532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:11.255 [2024-04-26 15:55:50.728552] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:11.255 [2024-04-26 15:55:50.728564] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:11.255 [2024-04-26 15:55:50.728577] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fb000 len:4096 00:13:11.255 [2024-04-26 15:55:50.728584] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:11.255 [2024-04-26 15:55:50.728596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:11.255 [2024-04-26 15:55:50.728611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:11.255 [2024-04-26 15:55:50.728628] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:11.255 [2024-04-26 15:55:50.728638] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:13:11.255 [2024-04-26 15:55:50.728652] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:13:11.255 [2024-04-26 15:55:50.728660] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:11.255 [2024-04-26 15:55:50.728669] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:13:11.255 [2024-04-26 15:55:50.728676] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:13:11.255 [2024-04-26 15:55:50.728685] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:13:11.255 [2024-04-26 15:55:50.728692] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:13:11.255 [2024-04-26 15:55:50.728726] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:11.255 [2024-04-26 15:55:50.728738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:11.255 [2024-04-26 15:55:50.728753] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:11.255 [2024-04-26 15:55:50.728762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:11.255 [2024-04-26 15:55:50.728777] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:11.255 [2024-04-26 15:55:50.728792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:11.255 [2024-04-26 15:55:50.728806] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:11.255 [2024-04-26 15:55:50.728817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:11.255 [2024-04-26 15:55:50.728840] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:11.255 [2024-04-26 15:55:50.728847] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:11.255 [2024-04-26 15:55:50.728855] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:11.255 [2024-04-26 15:55:50.728863] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:11.255 [2024-04-26 15:55:50.728874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:11.255 [2024-04-26 15:55:50.728885] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:11.255 [2024-04-26 15:55:50.728893] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:11.255 [2024-04-26 15:55:50.728904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:11.255 [2024-04-26 15:55:50.728916] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:11.255 [2024-04-26 15:55:50.728922] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:11.255 [2024-04-26 15:55:50.728933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:11.255 [2024-04-26 15:55:50.728947] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:11.255 [2024-04-26 15:55:50.728955] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:11.255 [2024-04-26 15:55:50.728965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:11.255 [2024-04-26 15:55:50.728979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:11.255 [2024-04-26 15:55:50.729005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:11.255 [2024-04-26 15:55:50.729024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:11.255 [2024-04-26 15:55:50.729034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:11.255 ===================================================== 00:13:11.255 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:11.255 ===================================================== 00:13:11.255 Controller Capabilities/Features 00:13:11.255 ================================ 00:13:11.255 Vendor ID: 4e58 00:13:11.255 Subsystem Vendor ID: 4e58 00:13:11.255 Serial Number: SPDK1 00:13:11.255 Model Number: SPDK bdev Controller 00:13:11.255 Firmware Version: 24.05 00:13:11.255 Recommended Arb Burst: 6 00:13:11.255 IEEE OUI Identifier: 8d 6b 50 00:13:11.255 Multi-path I/O 00:13:11.255 May have multiple subsystem ports: Yes 00:13:11.255 May have multiple controllers: Yes 00:13:11.255 Associated with SR-IOV VF: No 00:13:11.256 Max Data Transfer Size: 131072 00:13:11.256 Max Number of Namespaces: 32 00:13:11.256 Max Number of I/O Queues: 127 00:13:11.256 NVMe 
Specification Version (VS): 1.3 00:13:11.256 NVMe Specification Version (Identify): 1.3 00:13:11.256 Maximum Queue Entries: 256 00:13:11.256 Contiguous Queues Required: Yes 00:13:11.256 Arbitration Mechanisms Supported 00:13:11.256 Weighted Round Robin: Not Supported 00:13:11.256 Vendor Specific: Not Supported 00:13:11.256 Reset Timeout: 15000 ms 00:13:11.256 Doorbell Stride: 4 bytes 00:13:11.256 NVM Subsystem Reset: Not Supported 00:13:11.256 Command Sets Supported 00:13:11.256 NVM Command Set: Supported 00:13:11.256 Boot Partition: Not Supported 00:13:11.256 Memory Page Size Minimum: 4096 bytes 00:13:11.256 Memory Page Size Maximum: 4096 bytes 00:13:11.256 Persistent Memory Region: Not Supported 00:13:11.256 Optional Asynchronous Events Supported 00:13:11.256 Namespace Attribute Notices: Supported 00:13:11.256 Firmware Activation Notices: Not Supported 00:13:11.256 ANA Change Notices: Not Supported 00:13:11.256 PLE Aggregate Log Change Notices: Not Supported 00:13:11.256 LBA Status Info Alert Notices: Not Supported 00:13:11.256 EGE Aggregate Log Change Notices: Not Supported 00:13:11.256 Normal NVM Subsystem Shutdown event: Not Supported 00:13:11.256 Zone Descriptor Change Notices: Not Supported 00:13:11.256 Discovery Log Change Notices: Not Supported 00:13:11.256 Controller Attributes 00:13:11.256 128-bit Host Identifier: Supported 00:13:11.256 Non-Operational Permissive Mode: Not Supported 00:13:11.256 NVM Sets: Not Supported 00:13:11.256 Read Recovery Levels: Not Supported 00:13:11.256 Endurance Groups: Not Supported 00:13:11.256 Predictable Latency Mode: Not Supported 00:13:11.256 Traffic Based Keep ALive: Not Supported 00:13:11.256 Namespace Granularity: Not Supported 00:13:11.256 SQ Associations: Not Supported 00:13:11.256 UUID List: Not Supported 00:13:11.256 Multi-Domain Subsystem: Not Supported 00:13:11.256 Fixed Capacity Management: Not Supported 00:13:11.256 Variable Capacity Management: Not Supported 00:13:11.256 Delete Endurance Group: Not Supported 00:13:11.256 Delete NVM Set: Not Supported 00:13:11.256 Extended LBA Formats Supported: Not Supported 00:13:11.256 Flexible Data Placement Supported: Not Supported 00:13:11.256 00:13:11.256 Controller Memory Buffer Support 00:13:11.256 ================================ 00:13:11.256 Supported: No 00:13:11.256 00:13:11.256 Persistent Memory Region Support 00:13:11.256 ================================ 00:13:11.256 Supported: No 00:13:11.256 00:13:11.256 Admin Command Set Attributes 00:13:11.256 ============================ 00:13:11.256 Security Send/Receive: Not Supported 00:13:11.256 Format NVM: Not Supported 00:13:11.256 Firmware Activate/Download: Not Supported 00:13:11.256 Namespace Management: Not Supported 00:13:11.256 Device Self-Test: Not Supported 00:13:11.256 Directives: Not Supported 00:13:11.256 NVMe-MI: Not Supported 00:13:11.256 Virtualization Management: Not Supported 00:13:11.256 Doorbell Buffer Config: Not Supported 00:13:11.256 Get LBA Status Capability: Not Supported 00:13:11.256 Command & Feature Lockdown Capability: Not Supported 00:13:11.256 Abort Command Limit: 4 00:13:11.256 Async Event Request Limit: 4 00:13:11.256 Number of Firmware Slots: N/A 00:13:11.256 Firmware Slot 1 Read-Only: N/A 00:13:11.256 Firmware Activation Without Reset: N/A 00:13:11.256 Multiple Update Detection Support: N/A 00:13:11.256 Firmware Update Granularity: No Information Provided 00:13:11.256 Per-Namespace SMART Log: No 00:13:11.256 Asymmetric Namespace Access Log Page: Not Supported 00:13:11.256 Subsystem NQN: 
nqn.2019-07.io.spdk:cnode1 00:13:11.256 Command Effects Log Page: Supported 00:13:11.256 Get Log Page Extended Data: Supported 00:13:11.256 Telemetry Log Pages: Not Supported 00:13:11.256 Persistent Event Log Pages: Not Supported 00:13:11.256 Supported Log Pages Log Page: May Support 00:13:11.256 Commands Supported & Effects Log Page: Not Supported 00:13:11.256 Feature Identifiers & Effects Log Page:May Support 00:13:11.256 NVMe-MI Commands & Effects Log Page: May Support 00:13:11.256 Data Area 4 for Telemetry Log: Not Supported 00:13:11.256 Error Log Page Entries Supported: 128 00:13:11.256 Keep Alive: Supported 00:13:11.256 Keep Alive Granularity: 10000 ms 00:13:11.256 00:13:11.256 NVM Command Set Attributes 00:13:11.256 ========================== 00:13:11.256 Submission Queue Entry Size 00:13:11.256 Max: 64 00:13:11.256 Min: 64 00:13:11.256 Completion Queue Entry Size 00:13:11.256 Max: 16 00:13:11.256 Min: 16 00:13:11.256 Number of Namespaces: 32 00:13:11.256 Compare Command: Supported 00:13:11.256 Write Uncorrectable Command: Not Supported 00:13:11.256 Dataset Management Command: Supported 00:13:11.256 Write Zeroes Command: Supported 00:13:11.256 Set Features Save Field: Not Supported 00:13:11.256 Reservations: Not Supported 00:13:11.256 Timestamp: Not Supported 00:13:11.256 Copy: Supported 00:13:11.256 Volatile Write Cache: Present 00:13:11.256 Atomic Write Unit (Normal): 1 00:13:11.256 Atomic Write Unit (PFail): 1 00:13:11.256 Atomic Compare & Write Unit: 1 00:13:11.256 Fused Compare & Write: Supported 00:13:11.256 Scatter-Gather List 00:13:11.256 SGL Command Set: Supported (Dword aligned) 00:13:11.256 SGL Keyed: Not Supported 00:13:11.256 SGL Bit Bucket Descriptor: Not Supported 00:13:11.256 SGL Metadata Pointer: Not Supported 00:13:11.256 Oversized SGL: Not Supported 00:13:11.256 SGL Metadata Address: Not Supported 00:13:11.256 SGL Offset: Not Supported 00:13:11.256 Transport SGL Data Block: Not Supported 00:13:11.256 Replay Protected Memory Block: Not Supported 00:13:11.256 00:13:11.256 Firmware Slot Information 00:13:11.256 ========================= 00:13:11.256 Active slot: 1 00:13:11.256 Slot 1 Firmware Revision: 24.05 00:13:11.256 00:13:11.256 00:13:11.256 Commands Supported and Effects 00:13:11.256 ============================== 00:13:11.256 Admin Commands 00:13:11.256 -------------- 00:13:11.256 Get Log Page (02h): Supported 00:13:11.256 Identify (06h): Supported 00:13:11.256 Abort (08h): Supported 00:13:11.256 Set Features (09h): Supported 00:13:11.256 Get Features (0Ah): Supported 00:13:11.256 Asynchronous Event Request (0Ch): Supported 00:13:11.256 Keep Alive (18h): Supported 00:13:11.256 I/O Commands 00:13:11.256 ------------ 00:13:11.256 Flush (00h): Supported LBA-Change 00:13:11.256 Write (01h): Supported LBA-Change 00:13:11.256 Read (02h): Supported 00:13:11.256 Compare (05h): Supported 00:13:11.256 Write Zeroes (08h): Supported LBA-Change 00:13:11.256 Dataset Management (09h): Supported LBA-Change 00:13:11.256 Copy (19h): Supported LBA-Change 00:13:11.256 Unknown (79h): Supported LBA-Change 00:13:11.256 Unknown (7Ah): Supported 00:13:11.256 00:13:11.256 Error Log 00:13:11.256 ========= 00:13:11.256 00:13:11.256 Arbitration 00:13:11.256 =========== 00:13:11.256 Arbitration Burst: 1 00:13:11.256 00:13:11.256 Power Management 00:13:11.256 ================ 00:13:11.256 Number of Power States: 1 00:13:11.256 Current Power State: Power State #0 00:13:11.256 Power State #0: 00:13:11.256 Max Power: 0.00 W 00:13:11.256 Non-Operational State: Operational 00:13:11.256 Entry 
Latency: Not Reported 00:13:11.256 Exit Latency: Not Reported 00:13:11.256 Relative Read Throughput: 0 00:13:11.256 Relative Read Latency: 0 00:13:11.256 Relative Write Throughput: 0 00:13:11.256 Relative Write Latency: 0 00:13:11.256 Idle Power: Not Reported 00:13:11.256 Active Power: Not Reported 00:13:11.256 Non-Operational Permissive Mode: Not Supported 00:13:11.256 00:13:11.256 Health Information 00:13:11.256 ================== 00:13:11.256 Critical Warnings: 00:13:11.256 Available Spare Space: OK 00:13:11.256 Temperature: OK 00:13:11.256 Device Reliability: OK 00:13:11.256 Read Only: No 00:13:11.256 Volatile Memory Backup: OK 00:13:11.256 Current Temperature: 0 Kelvin (-2[2024-04-26 15:55:50.729190] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:11.256 [2024-04-26 15:55:50.729204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:11.256 [2024-04-26 15:55:50.729249] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:13:11.256 [2024-04-26 15:55:50.729261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.256 [2024-04-26 15:55:50.729272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.256 [2024-04-26 15:55:50.729281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.256 [2024-04-26 15:55:50.729290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.257 [2024-04-26 15:55:50.729805] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:11.257 [2024-04-26 15:55:50.729823] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:11.257 [2024-04-26 15:55:50.730806] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:11.257 [2024-04-26 15:55:50.730879] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:13:11.257 [2024-04-26 15:55:50.730891] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:13:11.257 [2024-04-26 15:55:50.731813] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:11.257 [2024-04-26 15:55:50.731831] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:13:11.257 [2024-04-26 15:55:50.732636] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:11.257 [2024-04-26 15:55:50.739088] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:11.257 73 Celsius) 00:13:11.257 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:11.257 Available Spare: 0% 00:13:11.257 Available Spare Threshold: 0% 00:13:11.257 Life Percentage Used: 0% 
00:13:11.257 Data Units Read: 0 00:13:11.257 Data Units Written: 0 00:13:11.257 Host Read Commands: 0 00:13:11.257 Host Write Commands: 0 00:13:11.257 Controller Busy Time: 0 minutes 00:13:11.257 Power Cycles: 0 00:13:11.257 Power On Hours: 0 hours 00:13:11.257 Unsafe Shutdowns: 0 00:13:11.257 Unrecoverable Media Errors: 0 00:13:11.257 Lifetime Error Log Entries: 0 00:13:11.257 Warning Temperature Time: 0 minutes 00:13:11.257 Critical Temperature Time: 0 minutes 00:13:11.257 00:13:11.257 Number of Queues 00:13:11.257 ================ 00:13:11.257 Number of I/O Submission Queues: 127 00:13:11.257 Number of I/O Completion Queues: 127 00:13:11.257 00:13:11.257 Active Namespaces 00:13:11.257 ================= 00:13:11.257 Namespace ID:1 00:13:11.257 Error Recovery Timeout: Unlimited 00:13:11.257 Command Set Identifier: NVM (00h) 00:13:11.257 Deallocate: Supported 00:13:11.257 Deallocated/Unwritten Error: Not Supported 00:13:11.257 Deallocated Read Value: Unknown 00:13:11.257 Deallocate in Write Zeroes: Not Supported 00:13:11.257 Deallocated Guard Field: 0xFFFF 00:13:11.257 Flush: Supported 00:13:11.257 Reservation: Supported 00:13:11.257 Namespace Sharing Capabilities: Multiple Controllers 00:13:11.257 Size (in LBAs): 131072 (0GiB) 00:13:11.257 Capacity (in LBAs): 131072 (0GiB) 00:13:11.257 Utilization (in LBAs): 131072 (0GiB) 00:13:11.257 NGUID: 8DBE59CB4D9D44719E9EA66A0537D0E5 00:13:11.257 UUID: 8dbe59cb-4d9d-4471-9e9e-a66a0537d0e5 00:13:11.257 Thin Provisioning: Not Supported 00:13:11.257 Per-NS Atomic Units: Yes 00:13:11.257 Atomic Boundary Size (Normal): 0 00:13:11.257 Atomic Boundary Size (PFail): 0 00:13:11.257 Atomic Boundary Offset: 0 00:13:11.257 Maximum Single Source Range Length: 65535 00:13:11.257 Maximum Copy Length: 65535 00:13:11.257 Maximum Source Range Count: 1 00:13:11.257 NGUID/EUI64 Never Reused: No 00:13:11.257 Namespace Write Protected: No 00:13:11.257 Number of LBA Formats: 1 00:13:11.257 Current LBA Format: LBA Format #00 00:13:11.257 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:11.257 00:13:11.257 15:55:50 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:11.257 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.517 [2024-04-26 15:55:51.047255] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:16.793 [2024-04-26 15:55:56.067727] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:16.793 Initializing NVMe Controllers 00:13:16.793 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:16.793 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:16.793 Initialization complete. Launching workers. 
00:13:16.793 ======================================================== 00:13:16.793 Latency(us) 00:13:16.793 Device Information : IOPS MiB/s Average min max 00:13:16.793 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39714.21 155.13 3221.94 1131.30 8190.03 00:13:16.793 ======================================================== 00:13:16.793 Total : 39714.21 155.13 3221.94 1131.30 8190.03 00:13:16.793 00:13:16.794 15:55:56 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:16.794 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.794 [2024-04-26 15:55:56.385354] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:22.070 [2024-04-26 15:56:01.424490] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:22.070 Initializing NVMe Controllers 00:13:22.070 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:22.071 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:22.071 Initialization complete. Launching workers. 00:13:22.071 ======================================================== 00:13:22.071 Latency(us) 00:13:22.071 Device Information : IOPS MiB/s Average min max 00:13:22.071 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16000.00 62.50 8009.35 4985.98 15961.33 00:13:22.071 ======================================================== 00:13:22.071 Total : 16000.00 62.50 8009.35 4985.98 15961.33 00:13:22.071 00:13:22.071 15:56:01 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:22.071 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.329 [2024-04-26 15:56:01.771166] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:27.623 [2024-04-26 15:56:06.854121] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:27.623 Initializing NVMe Controllers 00:13:27.623 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:27.623 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:27.623 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:27.623 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:27.623 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:27.623 Initialization complete. Launching workers. 
00:13:27.623 Starting thread on core 2 00:13:27.623 Starting thread on core 3 00:13:27.623 Starting thread on core 1 00:13:27.623 15:56:06 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:27.623 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.623 [2024-04-26 15:56:07.291680] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:31.020 [2024-04-26 15:56:10.456961] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:31.020 Initializing NVMe Controllers 00:13:31.020 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:31.020 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:31.020 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:31.020 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:31.020 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:31.020 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:31.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:31.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:31.020 Initialization complete. Launching workers. 00:13:31.020 Starting thread on core 1 with urgent priority queue 00:13:31.020 Starting thread on core 2 with urgent priority queue 00:13:31.020 Starting thread on core 3 with urgent priority queue 00:13:31.020 Starting thread on core 0 with urgent priority queue 00:13:31.020 SPDK bdev Controller (SPDK1 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:13:31.020 SPDK bdev Controller (SPDK1 ) core 1: 576.00 IO/s 173.61 secs/100000 ios 00:13:31.020 SPDK bdev Controller (SPDK1 ) core 2: 554.67 IO/s 180.29 secs/100000 ios 00:13:31.020 SPDK bdev Controller (SPDK1 ) core 3: 512.00 IO/s 195.31 secs/100000 ios 00:13:31.020 ======================================================== 00:13:31.020 00:13:31.020 15:56:10 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:31.021 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.280 [2024-04-26 15:56:10.896696] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:31.280 [2024-04-26 15:56:10.931219] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:31.539 Initializing NVMe Controllers 00:13:31.539 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:31.539 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:31.539 Namespace ID: 1 size: 0GB 00:13:31.539 Initialization complete. 00:13:31.539 INFO: using host memory buffer for IO 00:13:31.539 Hello world! 
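For reference, the per-controller setup exercised above reduces to a short RPC sequence plus an example-tool invocation. The following is a minimal sketch only: paths are shortened to be relative to the SPDK tree, "N" is a placeholder device index, and it assumes the nvmf target application is already running with a VFIOUSER transport created earlier in the run (that step is not shown in this excerpt).

# directory backing the vfio-user socket; the listener and the host-side tools both point at it
mkdir -p /var/run/vfio-user/domain/vfio-userN/N
# 64 MB malloc bdev with 512-byte blocks to serve as the namespace
scripts/rpc.py bdev_malloc_create 64 512 -b MallocN
scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnodeN -a -s SPDKN
scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnodeN MallocN
scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnodeN -t VFIOUSER \
    -a /var/run/vfio-user/domain/vfio-userN/N -s 0
# the example tools then attach with the same transport ID string, e.g.
build/bin/spdk_nvme_identify -g -L nvme -L nvme_vfio -L vfio_pci \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-userN/N subnqn:nqn.2019-07.io.spdk:cnodeN'
build/bin/spdk_nvme_perf -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-userN/N subnqn:nqn.2019-07.io.spdk:cnodeN'

All flags above are copied from the command lines recorded in this log.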
00:13:31.539 15:56:11 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:31.539 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.798 [2024-04-26 15:56:11.360683] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:32.736 Initializing NVMe Controllers 00:13:32.736 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:32.736 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:32.736 Initialization complete. Launching workers. 00:13:32.736 submit (in ns) avg, min, max = 6750.6, 3537.4, 4002026.1 00:13:32.736 complete (in ns) avg, min, max = 21163.1, 1979.1, 4001703.5 00:13:32.736 00:13:32.736 Submit histogram 00:13:32.736 ================ 00:13:32.736 Range in us Cumulative Count 00:13:32.736 3.534 - 3.548: 0.0068% ( 1) 00:13:32.736 3.548 - 3.562: 0.0608% ( 8) 00:13:32.736 3.562 - 3.590: 2.4464% ( 353) 00:13:32.736 3.590 - 3.617: 9.9953% ( 1117) 00:13:32.736 3.617 - 3.645: 20.7339% ( 1589) 00:13:32.736 3.645 - 3.673: 31.0468% ( 1526) 00:13:32.736 3.673 - 3.701: 40.0351% ( 1330) 00:13:32.736 3.701 - 3.729: 48.8815% ( 1309) 00:13:32.736 3.729 - 3.757: 59.2890% ( 1540) 00:13:32.736 3.757 - 3.784: 69.4735% ( 1507) 00:13:32.736 3.784 - 3.812: 77.0629% ( 1123) 00:13:32.736 3.812 - 3.840: 81.0435% ( 589) 00:13:32.736 3.840 - 3.868: 83.0844% ( 302) 00:13:32.736 3.868 - 3.896: 85.0375% ( 289) 00:13:32.736 3.896 - 3.923: 87.3555% ( 343) 00:13:32.736 3.923 - 3.951: 89.4303% ( 307) 00:13:32.736 3.951 - 3.979: 91.4442% ( 298) 00:13:32.736 3.979 - 4.007: 93.1878% ( 258) 00:13:32.736 4.007 - 4.035: 94.5597% ( 203) 00:13:32.736 4.035 - 4.063: 95.7356% ( 174) 00:13:32.736 4.063 - 4.090: 96.6277% ( 132) 00:13:32.736 4.090 - 4.118: 97.2292% ( 89) 00:13:32.736 4.118 - 4.146: 97.6955% ( 69) 00:13:32.736 4.146 - 4.174: 97.9928% ( 44) 00:13:32.736 4.174 - 4.202: 98.1753% ( 27) 00:13:32.736 4.202 - 4.230: 98.3172% ( 21) 00:13:32.736 4.230 - 4.257: 98.3578% ( 6) 00:13:32.736 4.257 - 4.285: 98.3983% ( 6) 00:13:32.736 4.285 - 4.313: 98.4456% ( 7) 00:13:32.736 4.313 - 4.341: 98.4591% ( 2) 00:13:32.736 4.341 - 4.369: 98.4794% ( 3) 00:13:32.736 4.369 - 4.397: 98.4862% ( 1) 00:13:32.736 4.424 - 4.452: 98.5065% ( 3) 00:13:32.736 4.452 - 4.480: 98.5132% ( 1) 00:13:32.736 4.480 - 4.508: 98.5402% ( 4) 00:13:32.736 4.508 - 4.536: 98.5538% ( 2) 00:13:32.736 4.536 - 4.563: 98.5740% ( 3) 00:13:32.736 4.563 - 4.591: 98.6011% ( 4) 00:13:32.736 4.591 - 4.619: 98.6281% ( 4) 00:13:32.736 4.619 - 4.647: 98.6822% ( 8) 00:13:32.736 4.647 - 4.675: 98.7497% ( 10) 00:13:32.736 4.675 - 4.703: 98.8106% ( 9) 00:13:32.736 4.703 - 4.730: 98.8444% ( 5) 00:13:32.736 4.730 - 4.758: 98.8917% ( 7) 00:13:32.736 4.758 - 4.786: 98.9390% ( 7) 00:13:32.736 4.786 - 4.814: 98.9930% ( 8) 00:13:32.736 4.814 - 4.842: 99.0403% ( 7) 00:13:32.736 4.842 - 4.870: 99.0606% ( 3) 00:13:32.736 4.870 - 4.897: 99.0674% ( 1) 00:13:32.736 4.897 - 4.925: 99.0809% ( 2) 00:13:32.736 4.925 - 4.953: 99.0877% ( 1) 00:13:32.736 4.953 - 4.981: 99.0944% ( 1) 00:13:32.736 4.981 - 5.009: 99.1147% ( 3) 00:13:32.736 5.009 - 5.037: 99.1282% ( 2) 00:13:32.736 5.037 - 5.064: 99.1688% ( 6) 00:13:32.736 5.064 - 5.092: 99.1755% ( 1) 00:13:32.736 5.092 - 5.120: 99.1823% ( 1) 00:13:32.736 5.120 - 5.148: 99.1890% ( 1) 00:13:32.736 5.148 - 5.176: 99.2161% ( 4) 00:13:32.736 5.176 - 5.203: 99.2363% ( 3) 00:13:32.736 5.203 - 5.231: 
99.2431% ( 1) 00:13:32.736 5.259 - 5.287: 99.2498% ( 1) 00:13:32.736 5.287 - 5.315: 99.2566% ( 1) 00:13:32.736 5.315 - 5.343: 99.2769% ( 3) 00:13:32.736 5.370 - 5.398: 99.2836% ( 1) 00:13:32.736 5.398 - 5.426: 99.3039% ( 3) 00:13:32.736 5.454 - 5.482: 99.3242% ( 3) 00:13:32.736 5.482 - 5.510: 99.3445% ( 3) 00:13:32.736 5.537 - 5.565: 99.3647% ( 3) 00:13:32.736 5.565 - 5.593: 99.3783% ( 2) 00:13:32.736 5.621 - 5.649: 99.3918% ( 2) 00:13:32.736 5.649 - 5.677: 99.4120% ( 3) 00:13:32.736 5.677 - 5.704: 99.4188% ( 1) 00:13:32.736 5.704 - 5.732: 99.4391% ( 3) 00:13:32.736 5.732 - 5.760: 99.4458% ( 1) 00:13:32.736 5.760 - 5.788: 99.4526% ( 1) 00:13:32.736 5.788 - 5.816: 99.4593% ( 1) 00:13:32.736 5.816 - 5.843: 99.4729% ( 2) 00:13:32.736 5.843 - 5.871: 99.4796% ( 1) 00:13:32.736 5.871 - 5.899: 99.4931% ( 2) 00:13:32.736 5.899 - 5.927: 99.4999% ( 1) 00:13:32.736 5.927 - 5.955: 99.5134% ( 2) 00:13:32.736 5.955 - 5.983: 99.5202% ( 1) 00:13:32.736 5.983 - 6.010: 99.5269% ( 1) 00:13:32.736 6.038 - 6.066: 99.5337% ( 1) 00:13:32.736 6.066 - 6.094: 99.5472% ( 2) 00:13:32.736 6.094 - 6.122: 99.5540% ( 1) 00:13:32.736 6.150 - 6.177: 99.5607% ( 1) 00:13:32.736 6.372 - 6.400: 99.5675% ( 1) 00:13:32.736 6.456 - 6.483: 99.5742% ( 1) 00:13:32.736 6.483 - 6.511: 99.5810% ( 1) 00:13:32.736 6.511 - 6.539: 99.5878% ( 1) 00:13:32.736 6.539 - 6.567: 99.5945% ( 1) 00:13:32.736 6.595 - 6.623: 99.6013% ( 1) 00:13:32.736 6.678 - 6.706: 99.6080% ( 1) 00:13:32.736 6.706 - 6.734: 99.6148% ( 1) 00:13:32.736 6.734 - 6.762: 99.6283% ( 2) 00:13:32.736 6.762 - 6.790: 99.6351% ( 1) 00:13:32.736 6.790 - 6.817: 99.6418% ( 1) 00:13:32.736 6.817 - 6.845: 99.6553% ( 2) 00:13:32.736 6.845 - 6.873: 99.6621% ( 1) 00:13:32.736 6.873 - 6.901: 99.6689% ( 1) 00:13:32.736 6.929 - 6.957: 99.6756% ( 1) 00:13:32.736 6.957 - 6.984: 99.6824% ( 1) 00:13:32.736 6.984 - 7.012: 99.6891% ( 1) 00:13:32.736 7.012 - 7.040: 99.6959% ( 1) 00:13:32.736 7.068 - 7.096: 99.7026% ( 1) 00:13:32.736 7.123 - 7.179: 99.7162% ( 2) 00:13:32.736 7.179 - 7.235: 99.7229% ( 1) 00:13:32.736 7.235 - 7.290: 99.7297% ( 1) 00:13:32.736 7.346 - 7.402: 99.7499% ( 3) 00:13:32.736 7.402 - 7.457: 99.7770% ( 4) 00:13:32.736 7.457 - 7.513: 99.7837% ( 1) 00:13:32.736 7.513 - 7.569: 99.7905% ( 1) 00:13:32.737 7.569 - 7.624: 99.7973% ( 1) 00:13:32.737 7.624 - 7.680: 99.8040% ( 1) 00:13:32.737 7.736 - 7.791: 99.8108% ( 1) 00:13:32.737 7.791 - 7.847: 99.8175% ( 1) 00:13:32.737 7.903 - 7.958: 99.8243% ( 1) 00:13:32.737 8.070 - 8.125: 99.8310% ( 1) 00:13:32.737 8.125 - 8.181: 99.8378% ( 1) 00:13:32.737 8.292 - 8.348: 99.8446% ( 1) 00:13:32.737 8.515 - 8.570: 99.8513% ( 1) 00:13:32.737 8.904 - 8.960: 99.8581% ( 1) 00:13:32.737 9.183 - 9.238: 99.8648% ( 1) 00:13:32.737 10.184 - 10.240: 99.8716% ( 1) 00:13:32.737 10.518 - 10.574: 99.8784% ( 1) 00:13:32.737 10.630 - 10.685: 99.8851% ( 1) 00:13:32.737 11.631 - 11.687: 99.8919% ( 1) 00:13:32.737 12.577 - 12.633: 99.8986% ( 1) 00:13:32.737 12.967 - 13.023: 99.9054% ( 1) 00:13:32.737 13.301 - 13.357: 99.9189% ( 2) 00:13:32.737 17.030 - 17.141: 99.9257% ( 1) 00:13:32.737 3989.148 - 4017.642: 100.0000% ( 11) 00:13:32.737 00:13:32.737 Complete histogram 00:13:32.737 ================== 00:13:32.737 Range in us Cumulative Count 00:13:32.737 1.976 - 1.990: 0.4663% ( 69) 00:13:32.737 1.990 - 2.003: 17.7130% ( 2552) 00:13:32.737 2.003 - 2.017: 56.3831% ( 5722) 00:13:32.737 2.017 - 2.031: 78.1645% ( 3223) 00:13:32.737 2.031 - 2.045: 89.9912% ( 1750) 00:13:32.737 2.045 - 2.059: 95.4247% ( 804) 00:13:32.737 2.059 - 2.073: 97.2630% ( 272) 00:13:32.737 2.073 - 
2.087: 97.9320% ( 99) 00:13:32.737 2.087 - 2.101: 98.3510% ( 62) 00:13:32.737 2.101 - 2.115: 98.5673% ( 32) 00:13:32.737 2.115 - 2.129: 98.7024% ( 20) 00:13:32.737 2.129 - 2.143: 98.7565% ( 8) 00:13:32.737 2.143 - 2.157: 98.8106% ( 8) 00:13:32.737 2.157 - 2.170: 98.8444% ( 5) 00:13:32.737 2.170 - 2.184: 98.8511% ( 1) 00:13:32.737 2.198 - 2.212: 98.8714% ( 3) 00:13:32.737 2.212 - 2.226: 98.8849% ( 2) 00:13:32.737 2.254 - 2.268: 98.8984% ( 2) 00:13:32.737 2.268 - 2.282: 98.9187% ( 3) 00:13:32.737 2.282 - 2.296: 98.9322% ( 2) 00:13:32.737 2.296 - 2.310: 98.9457% ( 2) 00:13:32.737 2.310 - 2.323: 98.9525% ( 1) 00:13:32.737 2.323 - 2.337: 98.9592% ( 1) 00:13:32.737 2.337 - 2.351: 98.9728% ( 2) 00:13:32.737 2.351 - 2.365: 98.9863% ( 2) 00:13:32.737 2.365 - 2.379: 98.9930% ( 1) 00:13:32.737 2.393 - 2.407: 99.0201% ( 4) 00:13:32.737 2.449 - 2.463: 99.0268% ( 1) 00:13:32.737 2.490 - 2.504: 99.0403% ( 2) 00:13:32.737 2.518 - 2.532: 99.0471% ( 1) 00:13:32.737 2.574 - 2.5[2024-04-26 15:56:12.384448] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:32.996 88: 99.0606% ( 2) 00:13:32.996 2.671 - 2.685: 99.0674% ( 1) 00:13:32.996 2.727 - 2.741: 99.0741% ( 1) 00:13:32.996 2.838 - 2.852: 99.0809% ( 1) 00:13:32.996 2.866 - 2.880: 99.0877% ( 1) 00:13:32.996 2.894 - 2.908: 99.0944% ( 1) 00:13:32.996 2.950 - 2.963: 99.1079% ( 2) 00:13:32.996 2.977 - 2.991: 99.1147% ( 1) 00:13:32.996 2.991 - 3.005: 99.1214% ( 1) 00:13:32.996 3.005 - 3.019: 99.1282% ( 1) 00:13:32.996 3.047 - 3.061: 99.1417% ( 2) 00:13:32.996 3.061 - 3.075: 99.1620% ( 3) 00:13:32.996 3.075 - 3.089: 99.1688% ( 1) 00:13:32.996 3.103 - 3.117: 99.1823% ( 2) 00:13:32.996 3.117 - 3.130: 99.1890% ( 1) 00:13:32.996 3.130 - 3.144: 99.2025% ( 2) 00:13:32.996 3.158 - 3.172: 99.2093% ( 1) 00:13:32.996 3.172 - 3.186: 99.2161% ( 1) 00:13:32.996 3.228 - 3.242: 99.2296% ( 2) 00:13:32.996 3.242 - 3.256: 99.2363% ( 1) 00:13:32.996 3.256 - 3.270: 99.2431% ( 1) 00:13:32.996 3.339 - 3.353: 99.2498% ( 1) 00:13:32.996 3.353 - 3.367: 99.2566% ( 1) 00:13:32.996 3.367 - 3.381: 99.2701% ( 2) 00:13:32.996 3.423 - 3.437: 99.2769% ( 1) 00:13:32.996 3.520 - 3.534: 99.2836% ( 1) 00:13:32.996 3.534 - 3.548: 99.2904% ( 1) 00:13:32.996 3.562 - 3.590: 99.2972% ( 1) 00:13:32.996 3.590 - 3.617: 99.3107% ( 2) 00:13:32.996 3.617 - 3.645: 99.3242% ( 2) 00:13:32.996 3.673 - 3.701: 99.3309% ( 1) 00:13:32.996 3.729 - 3.757: 99.3445% ( 2) 00:13:32.996 3.784 - 3.812: 99.3512% ( 1) 00:13:32.996 3.840 - 3.868: 99.3580% ( 1) 00:13:32.996 3.923 - 3.951: 99.3647% ( 1) 00:13:32.996 4.007 - 4.035: 99.3715% ( 1) 00:13:32.996 4.202 - 4.230: 99.3783% ( 1) 00:13:32.996 4.257 - 4.285: 99.3850% ( 1) 00:13:32.996 4.285 - 4.313: 99.3918% ( 1) 00:13:32.996 4.341 - 4.369: 99.3985% ( 1) 00:13:32.996 4.619 - 4.647: 99.4053% ( 1) 00:13:32.996 4.647 - 4.675: 99.4120% ( 1) 00:13:32.996 4.758 - 4.786: 99.4188% ( 1) 00:13:32.996 4.897 - 4.925: 99.4323% ( 2) 00:13:32.996 5.315 - 5.343: 99.4391% ( 1) 00:13:32.996 5.565 - 5.593: 99.4458% ( 1) 00:13:32.996 6.261 - 6.289: 99.4526% ( 1) 00:13:32.996 6.289 - 6.317: 99.4593% ( 1) 00:13:32.996 6.567 - 6.595: 99.4661% ( 1) 00:13:32.996 7.235 - 7.290: 99.4729% ( 1) 00:13:32.996 9.572 - 9.628: 99.4864% ( 2) 00:13:32.996 9.739 - 9.795: 99.4931% ( 1) 00:13:32.996 10.908 - 10.963: 99.4999% ( 1) 00:13:32.996 11.242 - 11.297: 99.5067% ( 1) 00:13:32.996 14.136 - 14.191: 99.5134% ( 1) 00:13:32.996 40.515 - 40.737: 99.5202% ( 1) 00:13:32.996 3362.282 - 3376.529: 99.5269% ( 1) 00:13:32.996 3989.148 - 4017.642: 100.0000% ( 70) 
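The AER step that follows (target/nvmf_vfio_user.sh@90, aer_vfio_user) drives a namespace hot-add so the controller emits a namespace-attribute notice (log page 4). Condensed into a rough sketch, again with paths shortened relative to the SPDK tree and all flags copied from the trace below; the touch file appears to act as a readiness signal that the wrapper waits on before changing the namespace list.

# start the AER example against the first controller and let it arm its AERs in the background
test/nvme/aer/aer -n 2 -g -t /tmp/aer_touch_file \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' &
# once /tmp/aer_touch_file exists, hot-add a second namespace to the same subsystem
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
# the example should then report "aer_cb - Changed Namespace", and the new nsid 2
# shows up in the subsystem listing
scripts/rpc.py nvmf_get_subsystems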
00:13:32.996 00:13:32.996 15:56:12 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:32.996 15:56:12 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:32.996 15:56:12 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:32.996 15:56:12 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:32.996 15:56:12 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:32.996 [2024-04-26 15:56:12.657786] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:13:32.996 [ 00:13:32.996 { 00:13:32.996 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:32.996 "subtype": "Discovery", 00:13:32.996 "listen_addresses": [], 00:13:32.996 "allow_any_host": true, 00:13:32.996 "hosts": [] 00:13:32.996 }, 00:13:32.996 { 00:13:32.996 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:32.996 "subtype": "NVMe", 00:13:32.996 "listen_addresses": [ 00:13:32.997 { 00:13:32.997 "transport": "VFIOUSER", 00:13:32.997 "trtype": "VFIOUSER", 00:13:32.997 "adrfam": "IPv4", 00:13:32.997 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:32.997 "trsvcid": "0" 00:13:32.997 } 00:13:32.997 ], 00:13:32.997 "allow_any_host": true, 00:13:32.997 "hosts": [], 00:13:32.997 "serial_number": "SPDK1", 00:13:32.997 "model_number": "SPDK bdev Controller", 00:13:32.997 "max_namespaces": 32, 00:13:32.997 "min_cntlid": 1, 00:13:32.997 "max_cntlid": 65519, 00:13:32.997 "namespaces": [ 00:13:32.997 { 00:13:32.997 "nsid": 1, 00:13:32.997 "bdev_name": "Malloc1", 00:13:32.997 "name": "Malloc1", 00:13:32.997 "nguid": "8DBE59CB4D9D44719E9EA66A0537D0E5", 00:13:32.997 "uuid": "8dbe59cb-4d9d-4471-9e9e-a66a0537d0e5" 00:13:32.997 } 00:13:32.997 ] 00:13:32.997 }, 00:13:32.997 { 00:13:32.997 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:32.997 "subtype": "NVMe", 00:13:32.997 "listen_addresses": [ 00:13:32.997 { 00:13:32.997 "transport": "VFIOUSER", 00:13:32.997 "trtype": "VFIOUSER", 00:13:32.997 "adrfam": "IPv4", 00:13:32.997 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:32.997 "trsvcid": "0" 00:13:32.997 } 00:13:32.997 ], 00:13:32.997 "allow_any_host": true, 00:13:32.997 "hosts": [], 00:13:32.997 "serial_number": "SPDK2", 00:13:32.997 "model_number": "SPDK bdev Controller", 00:13:32.997 "max_namespaces": 32, 00:13:32.997 "min_cntlid": 1, 00:13:32.997 "max_cntlid": 65519, 00:13:32.997 "namespaces": [ 00:13:32.997 { 00:13:32.997 "nsid": 1, 00:13:32.997 "bdev_name": "Malloc2", 00:13:32.997 "name": "Malloc2", 00:13:32.997 "nguid": "8585CC16F75149FD85E920A4E5DC72C4", 00:13:32.997 "uuid": "8585cc16-f751-49fd-85e9-20a4e5dc72c4" 00:13:32.997 } 00:13:32.997 ] 00:13:32.997 } 00:13:32.997 ] 00:13:33.256 15:56:12 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:33.256 15:56:12 -- target/nvmf_vfio_user.sh@34 -- # aerpid=2387552 00:13:33.256 15:56:12 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:33.256 15:56:12 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:33.256 15:56:12 -- common/autotest_common.sh@1251 -- # local i=0 00:13:33.256 15:56:12 -- 
common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:33.256 15:56:12 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:33.256 15:56:12 -- common/autotest_common.sh@1262 -- # return 0 00:13:33.256 15:56:12 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:33.256 15:56:12 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:33.256 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.515 Malloc3 00:13:33.515 15:56:12 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:33.515 [2024-04-26 15:56:12.991655] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:33.515 [2024-04-26 15:56:13.129817] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:33.515 15:56:13 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:33.775 Asynchronous Event Request test 00:13:33.775 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:33.775 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:33.775 Registering asynchronous event callbacks... 00:13:33.775 Starting namespace attribute notice tests for all controllers... 00:13:33.775 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:33.775 aer_cb - Changed Namespace 00:13:33.775 Cleaning up... 00:13:33.775 [ 00:13:33.775 { 00:13:33.775 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:33.775 "subtype": "Discovery", 00:13:33.775 "listen_addresses": [], 00:13:33.775 "allow_any_host": true, 00:13:33.775 "hosts": [] 00:13:33.775 }, 00:13:33.775 { 00:13:33.775 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:33.775 "subtype": "NVMe", 00:13:33.775 "listen_addresses": [ 00:13:33.775 { 00:13:33.775 "transport": "VFIOUSER", 00:13:33.775 "trtype": "VFIOUSER", 00:13:33.775 "adrfam": "IPv4", 00:13:33.775 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:33.775 "trsvcid": "0" 00:13:33.775 } 00:13:33.775 ], 00:13:33.775 "allow_any_host": true, 00:13:33.775 "hosts": [], 00:13:33.775 "serial_number": "SPDK1", 00:13:33.775 "model_number": "SPDK bdev Controller", 00:13:33.775 "max_namespaces": 32, 00:13:33.775 "min_cntlid": 1, 00:13:33.775 "max_cntlid": 65519, 00:13:33.775 "namespaces": [ 00:13:33.775 { 00:13:33.775 "nsid": 1, 00:13:33.775 "bdev_name": "Malloc1", 00:13:33.775 "name": "Malloc1", 00:13:33.775 "nguid": "8DBE59CB4D9D44719E9EA66A0537D0E5", 00:13:33.775 "uuid": "8dbe59cb-4d9d-4471-9e9e-a66a0537d0e5" 00:13:33.775 }, 00:13:33.775 { 00:13:33.775 "nsid": 2, 00:13:33.775 "bdev_name": "Malloc3", 00:13:33.775 "name": "Malloc3", 00:13:33.775 "nguid": "08F41B638DC5487CA2BDF45A0A56FBAD", 00:13:33.775 "uuid": "08f41b63-8dc5-487c-a2bd-f45a0a56fbad" 00:13:33.775 } 00:13:33.775 ] 00:13:33.775 }, 00:13:33.775 { 00:13:33.775 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:33.775 "subtype": "NVMe", 00:13:33.775 "listen_addresses": [ 00:13:33.775 { 00:13:33.775 "transport": "VFIOUSER", 00:13:33.775 "trtype": "VFIOUSER", 00:13:33.775 "adrfam": "IPv4", 00:13:33.775 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:33.775 "trsvcid": "0" 00:13:33.775 } 00:13:33.775 ], 00:13:33.775 "allow_any_host": true, 00:13:33.775 "hosts": [], 00:13:33.775 
"serial_number": "SPDK2", 00:13:33.775 "model_number": "SPDK bdev Controller", 00:13:33.775 "max_namespaces": 32, 00:13:33.775 "min_cntlid": 1, 00:13:33.775 "max_cntlid": 65519, 00:13:33.775 "namespaces": [ 00:13:33.775 { 00:13:33.775 "nsid": 1, 00:13:33.775 "bdev_name": "Malloc2", 00:13:33.775 "name": "Malloc2", 00:13:33.775 "nguid": "8585CC16F75149FD85E920A4E5DC72C4", 00:13:33.775 "uuid": "8585cc16-f751-49fd-85e9-20a4e5dc72c4" 00:13:33.775 } 00:13:33.775 ] 00:13:33.775 } 00:13:33.775 ] 00:13:33.775 15:56:13 -- target/nvmf_vfio_user.sh@44 -- # wait 2387552 00:13:33.775 15:56:13 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:33.775 15:56:13 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:33.775 15:56:13 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:33.775 15:56:13 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:33.775 [2024-04-26 15:56:13.360636] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:13:33.775 [2024-04-26 15:56:13.360701] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2387593 ] 00:13:33.775 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.775 [2024-04-26 15:56:13.406362] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:33.775 [2024-04-26 15:56:13.408757] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:33.775 [2024-04-26 15:56:13.408788] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fcd84aba000 00:13:33.775 [2024-04-26 15:56:13.409754] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:33.775 [2024-04-26 15:56:13.410776] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:33.775 [2024-04-26 15:56:13.411771] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:33.775 [2024-04-26 15:56:13.412776] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:33.775 [2024-04-26 15:56:13.413784] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:33.775 [2024-04-26 15:56:13.414801] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:33.775 [2024-04-26 15:56:13.415804] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:33.775 [2024-04-26 15:56:13.416811] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:33.775 [2024-04-26 15:56:13.417825] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: 
Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:33.775 [2024-04-26 15:56:13.417845] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fcd84aaf000 00:13:33.775 [2024-04-26 15:56:13.419079] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:33.775 [2024-04-26 15:56:13.433457] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:33.775 [2024-04-26 15:56:13.433491] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:33.775 [2024-04-26 15:56:13.438616] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:33.775 [2024-04-26 15:56:13.438748] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:33.775 [2024-04-26 15:56:13.439643] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:33.775 [2024-04-26 15:56:13.439666] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:33.775 [2024-04-26 15:56:13.439678] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:33.775 [2024-04-26 15:56:13.439732] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:33.775 [2024-04-26 15:56:13.439749] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:33.775 [2024-04-26 15:56:13.439760] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:33.775 [2024-04-26 15:56:13.440750] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:33.775 [2024-04-26 15:56:13.440767] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:33.775 [2024-04-26 15:56:13.440780] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:33.775 [2024-04-26 15:56:13.441756] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:33.775 [2024-04-26 15:56:13.441775] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:33.775 [2024-04-26 15:56:13.442763] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:33.775 [2024-04-26 15:56:13.442779] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:33.775 [2024-04-26 15:56:13.442787] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
controller is disabled (timeout 15000 ms) 00:13:33.775 [2024-04-26 15:56:13.442800] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:33.775 [2024-04-26 15:56:13.442909] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:33.775 [2024-04-26 15:56:13.442918] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:33.775 [2024-04-26 15:56:13.442927] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:33.775 [2024-04-26 15:56:13.443774] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:33.775 [2024-04-26 15:56:13.444780] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:33.775 [2024-04-26 15:56:13.445803] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:33.775 [2024-04-26 15:56:13.446802] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:33.775 [2024-04-26 15:56:13.446862] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:33.775 [2024-04-26 15:56:13.447815] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:33.775 [2024-04-26 15:56:13.447829] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:33.775 [2024-04-26 15:56:13.447840] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:33.775 [2024-04-26 15:56:13.447864] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:33.775 [2024-04-26 15:56:13.447886] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:33.775 [2024-04-26 15:56:13.447910] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:33.775 [2024-04-26 15:56:13.447920] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:33.775 [2024-04-26 15:56:13.447937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:33.775 [2024-04-26 15:56:13.456087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:33.775 [2024-04-26 15:56:13.456113] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:33.775 [2024-04-26 15:56:13.456123] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 
00:13:33.775 [2024-04-26 15:56:13.456131] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:33.775 [2024-04-26 15:56:13.456140] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:33.775 [2024-04-26 15:56:13.456147] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:33.775 [2024-04-26 15:56:13.456164] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:33.775 [2024-04-26 15:56:13.456174] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:33.775 [2024-04-26 15:56:13.456190] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:33.776 [2024-04-26 15:56:13.456207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:34.036 [2024-04-26 15:56:13.464083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:34.036 [2024-04-26 15:56:13.464114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.036 [2024-04-26 15:56:13.464131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.036 [2024-04-26 15:56:13.464142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.036 [2024-04-26 15:56:13.464155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.036 [2024-04-26 15:56:13.464162] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:34.036 [2024-04-26 15:56:13.464176] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:34.036 [2024-04-26 15:56:13.464188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:34.036 [2024-04-26 15:56:13.472083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:34.036 [2024-04-26 15:56:13.472105] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:34.036 [2024-04-26 15:56:13.472116] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:34.036 [2024-04-26 15:56:13.472129] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:34.036 [2024-04-26 15:56:13.472150] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait 
for set number of queues (timeout 30000 ms) 00:13:34.036 [2024-04-26 15:56:13.472163] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:34.036 [2024-04-26 15:56:13.480083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:34.036 [2024-04-26 15:56:13.480156] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:34.036 [2024-04-26 15:56:13.480176] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:34.036 [2024-04-26 15:56:13.480190] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:34.036 [2024-04-26 15:56:13.480201] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:34.036 [2024-04-26 15:56:13.480212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:34.036 [2024-04-26 15:56:13.488084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:34.036 [2024-04-26 15:56:13.488124] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:34.036 [2024-04-26 15:56:13.488142] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:34.036 [2024-04-26 15:56:13.488156] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:34.036 [2024-04-26 15:56:13.488174] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:34.036 [2024-04-26 15:56:13.488181] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:34.036 [2024-04-26 15:56:13.488194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:34.036 [2024-04-26 15:56:13.496082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:34.036 [2024-04-26 15:56:13.496119] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:34.036 [2024-04-26 15:56:13.496132] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:34.036 [2024-04-26 15:56:13.496152] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:34.036 [2024-04-26 15:56:13.496159] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:34.036 [2024-04-26 15:56:13.496173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:34.036 [2024-04-26 15:56:13.504080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:34.036 [2024-04-26 15:56:13.504111] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:34.036 [2024-04-26 15:56:13.504124] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:34.036 [2024-04-26 15:56:13.504136] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:34.037 [2024-04-26 15:56:13.504146] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:34.037 [2024-04-26 15:56:13.504155] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:34.037 [2024-04-26 15:56:13.504162] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:34.037 [2024-04-26 15:56:13.504170] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:34.037 [2024-04-26 15:56:13.504178] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:34.037 [2024-04-26 15:56:13.504215] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:34.037 [2024-04-26 15:56:13.512083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:34.037 [2024-04-26 15:56:13.512115] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:34.037 [2024-04-26 15:56:13.520080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:34.037 [2024-04-26 15:56:13.520110] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:34.037 [2024-04-26 15:56:13.528084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:34.037 [2024-04-26 15:56:13.528112] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:34.037 [2024-04-26 15:56:13.536080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:34.037 [2024-04-26 15:56:13.536127] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:34.037 [2024-04-26 15:56:13.536136] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:34.037 [2024-04-26 15:56:13.536144] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:34.037 [2024-04-26 15:56:13.536152] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:34.037 [2024-04-26 15:56:13.536166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff 
cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:34.037 [2024-04-26 15:56:13.536177] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:34.037 [2024-04-26 15:56:13.536186] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:34.037 [2024-04-26 15:56:13.536195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:34.037 [2024-04-26 15:56:13.536207] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:34.037 [2024-04-26 15:56:13.536214] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:34.037 [2024-04-26 15:56:13.536224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:34.037 [2024-04-26 15:56:13.536237] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:34.037 [2024-04-26 15:56:13.536245] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:34.037 [2024-04-26 15:56:13.536256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:34.037 [2024-04-26 15:56:13.544086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:34.037 [2024-04-26 15:56:13.544118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:34.037 [2024-04-26 15:56:13.544137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:34.037 [2024-04-26 15:56:13.544147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:34.037 ===================================================== 00:13:34.037 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:34.037 ===================================================== 00:13:34.037 Controller Capabilities/Features 00:13:34.037 ================================ 00:13:34.037 Vendor ID: 4e58 00:13:34.037 Subsystem Vendor ID: 4e58 00:13:34.037 Serial Number: SPDK2 00:13:34.037 Model Number: SPDK bdev Controller 00:13:34.037 Firmware Version: 24.05 00:13:34.037 Recommended Arb Burst: 6 00:13:34.037 IEEE OUI Identifier: 8d 6b 50 00:13:34.037 Multi-path I/O 00:13:34.037 May have multiple subsystem ports: Yes 00:13:34.037 May have multiple controllers: Yes 00:13:34.037 Associated with SR-IOV VF: No 00:13:34.037 Max Data Transfer Size: 131072 00:13:34.037 Max Number of Namespaces: 32 00:13:34.037 Max Number of I/O Queues: 127 00:13:34.037 NVMe Specification Version (VS): 1.3 00:13:34.037 NVMe Specification Version (Identify): 1.3 00:13:34.037 Maximum Queue Entries: 256 00:13:34.037 Contiguous Queues Required: Yes 00:13:34.037 Arbitration Mechanisms Supported 00:13:34.037 Weighted Round Robin: Not Supported 00:13:34.037 Vendor Specific: Not Supported 00:13:34.037 Reset Timeout: 15000 ms 00:13:34.037 Doorbell Stride: 4 bytes 00:13:34.037 NVM Subsystem Reset: Not Supported 00:13:34.037 Command Sets Supported 00:13:34.037 NVM 
Command Set: Supported 00:13:34.037 Boot Partition: Not Supported 00:13:34.037 Memory Page Size Minimum: 4096 bytes 00:13:34.037 Memory Page Size Maximum: 4096 bytes 00:13:34.037 Persistent Memory Region: Not Supported 00:13:34.037 Optional Asynchronous Events Supported 00:13:34.037 Namespace Attribute Notices: Supported 00:13:34.037 Firmware Activation Notices: Not Supported 00:13:34.037 ANA Change Notices: Not Supported 00:13:34.037 PLE Aggregate Log Change Notices: Not Supported 00:13:34.037 LBA Status Info Alert Notices: Not Supported 00:13:34.037 EGE Aggregate Log Change Notices: Not Supported 00:13:34.037 Normal NVM Subsystem Shutdown event: Not Supported 00:13:34.037 Zone Descriptor Change Notices: Not Supported 00:13:34.037 Discovery Log Change Notices: Not Supported 00:13:34.037 Controller Attributes 00:13:34.037 128-bit Host Identifier: Supported 00:13:34.037 Non-Operational Permissive Mode: Not Supported 00:13:34.037 NVM Sets: Not Supported 00:13:34.037 Read Recovery Levels: Not Supported 00:13:34.037 Endurance Groups: Not Supported 00:13:34.037 Predictable Latency Mode: Not Supported 00:13:34.037 Traffic Based Keep ALive: Not Supported 00:13:34.037 Namespace Granularity: Not Supported 00:13:34.037 SQ Associations: Not Supported 00:13:34.037 UUID List: Not Supported 00:13:34.037 Multi-Domain Subsystem: Not Supported 00:13:34.037 Fixed Capacity Management: Not Supported 00:13:34.037 Variable Capacity Management: Not Supported 00:13:34.037 Delete Endurance Group: Not Supported 00:13:34.037 Delete NVM Set: Not Supported 00:13:34.037 Extended LBA Formats Supported: Not Supported 00:13:34.037 Flexible Data Placement Supported: Not Supported 00:13:34.037 00:13:34.037 Controller Memory Buffer Support 00:13:34.037 ================================ 00:13:34.037 Supported: No 00:13:34.037 00:13:34.037 Persistent Memory Region Support 00:13:34.037 ================================ 00:13:34.037 Supported: No 00:13:34.037 00:13:34.037 Admin Command Set Attributes 00:13:34.037 ============================ 00:13:34.037 Security Send/Receive: Not Supported 00:13:34.037 Format NVM: Not Supported 00:13:34.037 Firmware Activate/Download: Not Supported 00:13:34.037 Namespace Management: Not Supported 00:13:34.037 Device Self-Test: Not Supported 00:13:34.037 Directives: Not Supported 00:13:34.037 NVMe-MI: Not Supported 00:13:34.037 Virtualization Management: Not Supported 00:13:34.037 Doorbell Buffer Config: Not Supported 00:13:34.037 Get LBA Status Capability: Not Supported 00:13:34.037 Command & Feature Lockdown Capability: Not Supported 00:13:34.037 Abort Command Limit: 4 00:13:34.037 Async Event Request Limit: 4 00:13:34.037 Number of Firmware Slots: N/A 00:13:34.037 Firmware Slot 1 Read-Only: N/A 00:13:34.037 Firmware Activation Without Reset: N/A 00:13:34.037 Multiple Update Detection Support: N/A 00:13:34.037 Firmware Update Granularity: No Information Provided 00:13:34.037 Per-Namespace SMART Log: No 00:13:34.037 Asymmetric Namespace Access Log Page: Not Supported 00:13:34.037 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:34.037 Command Effects Log Page: Supported 00:13:34.037 Get Log Page Extended Data: Supported 00:13:34.037 Telemetry Log Pages: Not Supported 00:13:34.037 Persistent Event Log Pages: Not Supported 00:13:34.037 Supported Log Pages Log Page: May Support 00:13:34.037 Commands Supported & Effects Log Page: Not Supported 00:13:34.037 Feature Identifiers & Effects Log Page:May Support 00:13:34.037 NVMe-MI Commands & Effects Log Page: May Support 00:13:34.037 Data Area 4 for 
Telemetry Log: Not Supported 00:13:34.037 Error Log Page Entries Supported: 128 00:13:34.037 Keep Alive: Supported 00:13:34.037 Keep Alive Granularity: 10000 ms 00:13:34.037 00:13:34.038 NVM Command Set Attributes 00:13:34.038 ========================== 00:13:34.038 Submission Queue Entry Size 00:13:34.038 Max: 64 00:13:34.038 Min: 64 00:13:34.038 Completion Queue Entry Size 00:13:34.038 Max: 16 00:13:34.038 Min: 16 00:13:34.038 Number of Namespaces: 32 00:13:34.038 Compare Command: Supported 00:13:34.038 Write Uncorrectable Command: Not Supported 00:13:34.038 Dataset Management Command: Supported 00:13:34.038 Write Zeroes Command: Supported 00:13:34.038 Set Features Save Field: Not Supported 00:13:34.038 Reservations: Not Supported 00:13:34.038 Timestamp: Not Supported 00:13:34.038 Copy: Supported 00:13:34.038 Volatile Write Cache: Present 00:13:34.038 Atomic Write Unit (Normal): 1 00:13:34.038 Atomic Write Unit (PFail): 1 00:13:34.038 Atomic Compare & Write Unit: 1 00:13:34.038 Fused Compare & Write: Supported 00:13:34.038 Scatter-Gather List 00:13:34.038 SGL Command Set: Supported (Dword aligned) 00:13:34.038 SGL Keyed: Not Supported 00:13:34.038 SGL Bit Bucket Descriptor: Not Supported 00:13:34.038 SGL Metadata Pointer: Not Supported 00:13:34.038 Oversized SGL: Not Supported 00:13:34.038 SGL Metadata Address: Not Supported 00:13:34.038 SGL Offset: Not Supported 00:13:34.038 Transport SGL Data Block: Not Supported 00:13:34.038 Replay Protected Memory Block: Not Supported 00:13:34.038 00:13:34.038 Firmware Slot Information 00:13:34.038 ========================= 00:13:34.038 Active slot: 1 00:13:34.038 Slot 1 Firmware Revision: 24.05 00:13:34.038 00:13:34.038 00:13:34.038 Commands Supported and Effects 00:13:34.038 ============================== 00:13:34.038 Admin Commands 00:13:34.038 -------------- 00:13:34.038 Get Log Page (02h): Supported 00:13:34.038 Identify (06h): Supported 00:13:34.038 Abort (08h): Supported 00:13:34.038 Set Features (09h): Supported 00:13:34.038 Get Features (0Ah): Supported 00:13:34.038 Asynchronous Event Request (0Ch): Supported 00:13:34.038 Keep Alive (18h): Supported 00:13:34.038 I/O Commands 00:13:34.038 ------------ 00:13:34.038 Flush (00h): Supported LBA-Change 00:13:34.038 Write (01h): Supported LBA-Change 00:13:34.038 Read (02h): Supported 00:13:34.038 Compare (05h): Supported 00:13:34.038 Write Zeroes (08h): Supported LBA-Change 00:13:34.038 Dataset Management (09h): Supported LBA-Change 00:13:34.038 Copy (19h): Supported LBA-Change 00:13:34.038 Unknown (79h): Supported LBA-Change 00:13:34.038 Unknown (7Ah): Supported 00:13:34.038 00:13:34.038 Error Log 00:13:34.038 ========= 00:13:34.038 00:13:34.038 Arbitration 00:13:34.038 =========== 00:13:34.038 Arbitration Burst: 1 00:13:34.038 00:13:34.038 Power Management 00:13:34.038 ================ 00:13:34.038 Number of Power States: 1 00:13:34.038 Current Power State: Power State #0 00:13:34.038 Power State #0: 00:13:34.038 Max Power: 0.00 W 00:13:34.038 Non-Operational State: Operational 00:13:34.038 Entry Latency: Not Reported 00:13:34.038 Exit Latency: Not Reported 00:13:34.038 Relative Read Throughput: 0 00:13:34.038 Relative Read Latency: 0 00:13:34.038 Relative Write Throughput: 0 00:13:34.038 Relative Write Latency: 0 00:13:34.038 Idle Power: Not Reported 00:13:34.038 Active Power: Not Reported 00:13:34.038 Non-Operational Permissive Mode: Not Supported 00:13:34.038 00:13:34.038 Health Information 00:13:34.038 ================== 00:13:34.038 Critical Warnings: 00:13:34.038 Available Spare Space: OK 
00:13:34.038 Temperature: OK 00:13:34.038 Device Reliability: OK 00:13:34.038 Read Only: No 00:13:34.038 Volatile Memory Backup: OK 00:13:34.038 Current Temperature: 0 Kelvin (-273 Celsius) [2024-04-26 15:56:13.544298] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:34.038 [2024-04-26 15:56:13.552082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:34.038 [2024-04-26 15:56:13.552141] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:34.038 [2024-04-26 15:56:13.552156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.038 [2024-04-26 15:56:13.552167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.038 [2024-04-26 15:56:13.552176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.038 [2024-04-26 15:56:13.552186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.038 [2024-04-26 15:56:13.552264] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:34.038 [2024-04-26 15:56:13.552283] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:34.038 [2024-04-26 15:56:13.553267] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:34.038 [2024-04-26 15:56:13.553329] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:34.038 [2024-04-26 15:56:13.553342] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:34.038 [2024-04-26 15:56:13.554271] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:34.038 [2024-04-26 15:56:13.554292] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:34.038 [2024-04-26 15:56:13.555036] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:34.038 [2024-04-26 15:56:13.556174] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:34.038 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:34.038 Available Spare: 0% 00:13:34.038 Available Spare Threshold: 0% 00:13:34.038 Life Percentage Used: 0% 00:13:34.038 Data Units Read: 0 00:13:34.038 Data Units Written: 0 00:13:34.038 Host Read Commands: 0 00:13:34.038 Host Write Commands: 0 00:13:34.038 Controller Busy Time: 0 minutes 00:13:34.038 Power Cycles: 0 00:13:34.038 Power On Hours: 0 hours 00:13:34.038 Unsafe Shutdowns: 0 00:13:34.038 Unrecoverable Media Errors: 0 00:13:34.038 Lifetime Error Log Entries: 0 00:13:34.038 Warning Temperature Time: 0 minutes 00:13:34.038 Critical Temperature Time: 0 minutes 00:13:34.038 00:13:34.038 Number of Queues 
00:13:34.038 ================ 00:13:34.038 Number of I/O Submission Queues: 127 00:13:34.038 Number of I/O Completion Queues: 127 00:13:34.038 00:13:34.038 Active Namespaces 00:13:34.038 ================= 00:13:34.038 Namespace ID:1 00:13:34.038 Error Recovery Timeout: Unlimited 00:13:34.038 Command Set Identifier: NVM (00h) 00:13:34.038 Deallocate: Supported 00:13:34.038 Deallocated/Unwritten Error: Not Supported 00:13:34.038 Deallocated Read Value: Unknown 00:13:34.038 Deallocate in Write Zeroes: Not Supported 00:13:34.038 Deallocated Guard Field: 0xFFFF 00:13:34.038 Flush: Supported 00:13:34.038 Reservation: Supported 00:13:34.038 Namespace Sharing Capabilities: Multiple Controllers 00:13:34.038 Size (in LBAs): 131072 (0GiB) 00:13:34.038 Capacity (in LBAs): 131072 (0GiB) 00:13:34.038 Utilization (in LBAs): 131072 (0GiB) 00:13:34.038 NGUID: 8585CC16F75149FD85E920A4E5DC72C4 00:13:34.038 UUID: 8585cc16-f751-49fd-85e9-20a4e5dc72c4 00:13:34.038 Thin Provisioning: Not Supported 00:13:34.038 Per-NS Atomic Units: Yes 00:13:34.038 Atomic Boundary Size (Normal): 0 00:13:34.038 Atomic Boundary Size (PFail): 0 00:13:34.038 Atomic Boundary Offset: 0 00:13:34.038 Maximum Single Source Range Length: 65535 00:13:34.038 Maximum Copy Length: 65535 00:13:34.038 Maximum Source Range Count: 1 00:13:34.038 NGUID/EUI64 Never Reused: No 00:13:34.038 Namespace Write Protected: No 00:13:34.038 Number of LBA Formats: 1 00:13:34.038 Current LBA Format: LBA Format #00 00:13:34.038 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:34.038 00:13:34.038 15:56:13 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:34.038 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.307 [2024-04-26 15:56:13.872906] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:39.583 [2024-04-26 15:56:18.980034] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:39.583 Initializing NVMe Controllers 00:13:39.583 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:39.583 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:39.583 Initialization complete. Launching workers. 
00:13:39.583 ======================================================== 00:13:39.583 Latency(us) 00:13:39.583 Device Information : IOPS MiB/s Average min max 00:13:39.583 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39933.22 155.99 3205.03 1126.24 6650.59 00:13:39.583 ======================================================== 00:13:39.583 Total : 39933.22 155.99 3205.03 1126.24 6650.59 00:13:39.583 00:13:39.583 15:56:19 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:39.583 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.842 [2024-04-26 15:56:19.310458] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:45.119 [2024-04-26 15:56:24.334726] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:45.119 Initializing NVMe Controllers 00:13:45.119 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:45.119 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:45.119 Initialization complete. Launching workers. 00:13:45.119 ======================================================== 00:13:45.119 Latency(us) 00:13:45.119 Device Information : IOPS MiB/s Average min max 00:13:45.119 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 38802.74 151.57 3297.73 1144.62 7222.12 00:13:45.119 ======================================================== 00:13:45.119 Total : 38802.74 151.57 3297.73 1144.62 7222.12 00:13:45.119 00:13:45.119 15:56:24 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:45.119 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.119 [2024-04-26 15:56:24.661069] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:50.402 [2024-04-26 15:56:29.812180] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:50.402 Initializing NVMe Controllers 00:13:50.402 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:50.402 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:50.402 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:50.402 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:50.402 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:50.402 Initialization complete. Launching workers. 
00:13:50.402 Starting thread on core 2 00:13:50.402 Starting thread on core 3 00:13:50.402 Starting thread on core 1 00:13:50.402 15:56:29 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:50.402 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.662 [2024-04-26 15:56:30.244698] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:53.955 [2024-04-26 15:56:33.420485] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:53.955 Initializing NVMe Controllers 00:13:53.955 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:53.955 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:53.955 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:53.955 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:53.955 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:53.955 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:53.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:53.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:53.955 Initialization complete. Launching workers. 00:13:53.955 Starting thread on core 1 with urgent priority queue 00:13:53.955 Starting thread on core 2 with urgent priority queue 00:13:53.955 Starting thread on core 3 with urgent priority queue 00:13:53.955 Starting thread on core 0 with urgent priority queue 00:13:53.955 SPDK bdev Controller (SPDK2 ) core 0: 618.67 IO/s 161.64 secs/100000 ios 00:13:53.955 SPDK bdev Controller (SPDK2 ) core 1: 533.33 IO/s 187.50 secs/100000 ios 00:13:53.955 SPDK bdev Controller (SPDK2 ) core 2: 490.67 IO/s 203.80 secs/100000 ios 00:13:53.955 SPDK bdev Controller (SPDK2 ) core 3: 512.00 IO/s 195.31 secs/100000 ios 00:13:53.955 ======================================================== 00:13:53.955 00:13:53.955 15:56:33 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:53.955 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.214 [2024-04-26 15:56:33.859698] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:54.214 [2024-04-26 15:56:33.871123] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:54.473 Initializing NVMe Controllers 00:13:54.473 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:54.473 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:54.473 Namespace ID: 1 size: 0GB 00:13:54.473 Initialization complete. 00:13:54.473 INFO: using host memory buffer for IO 00:13:54.473 Hello world! 
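The example apps exercised in this pass (identify, perf, reconnect, arbitration, hello_world) all reach the same vfio-user endpoint through one transport string. As a minimal sketch for rerunning them by hand, assuming the target from this run is still up and the socket path under /var/run/vfio-user is unchanged (the SPDK and TRID shell variables are shorthand introduced here, not part of the test script):

    # bash sketch; flags copied from the invocations logged above
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    # controller/namespace report over vfio-user
    $SPDK/build/bin/spdk_nvme_identify -r "$TRID" -g -L nvme -L nvme_vfio -L vfio_pci
    # 5-second 4 KiB read workload on core mask 0x2, matching the perf step above
    $SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
    # the hello_world example from this step
    $SPDK/build/examples/hello_world -d 256 -g -r "$TRID"

The -r string carries trtype/traddr/subnqn; between the two subsystems in this run only the traddr (vfio-user1/1 vs vfio-user2/2) and the subnqn (cnode1 vs cnode2) differ.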
00:13:54.473 15:56:33 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:54.474 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.733 [2024-04-26 15:56:34.291768] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:56.113 Initializing NVMe Controllers 00:13:56.113 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:56.113 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:56.113 Initialization complete. Launching workers. 00:13:56.113 submit (in ns) avg, min, max = 6226.1, 3547.8, 4001453.9 00:13:56.113 complete (in ns) avg, min, max = 24985.3, 1977.4, 6991253.0 00:13:56.113 00:13:56.113 Submit histogram 00:13:56.113 ================ 00:13:56.113 Range in us Cumulative Count 00:13:56.113 3.548 - 3.562: 0.2323% ( 34) 00:13:56.113 3.562 - 3.590: 4.7408% ( 660) 00:13:56.113 3.590 - 3.617: 14.0037% ( 1356) 00:13:56.113 3.617 - 3.645: 24.3391% ( 1513) 00:13:56.113 3.645 - 3.673: 33.8958% ( 1399) 00:13:56.113 3.673 - 3.701: 42.3595% ( 1239) 00:13:56.113 3.701 - 3.729: 51.0896% ( 1278) 00:13:56.113 3.729 - 3.757: 61.6504% ( 1546) 00:13:56.113 3.757 - 3.784: 71.8082% ( 1487) 00:13:56.113 3.784 - 3.812: 79.1516% ( 1075) 00:13:56.113 3.812 - 3.840: 82.2665% ( 456) 00:13:56.113 3.840 - 3.868: 84.0973% ( 268) 00:13:56.113 3.868 - 3.896: 86.2559% ( 316) 00:13:56.113 3.896 - 3.923: 88.7151% ( 360) 00:13:56.113 3.923 - 3.951: 90.8327% ( 310) 00:13:56.113 3.951 - 3.979: 92.3697% ( 225) 00:13:56.113 3.979 - 4.007: 94.1048% ( 254) 00:13:56.113 4.007 - 4.035: 95.4915% ( 203) 00:13:56.113 4.035 - 4.063: 96.4205% ( 136) 00:13:56.113 4.063 - 4.090: 97.1514% ( 107) 00:13:56.113 4.090 - 4.118: 97.6638% ( 75) 00:13:56.113 4.118 - 4.146: 97.9029% ( 35) 00:13:56.113 4.146 - 4.174: 98.0873% ( 27) 00:13:56.113 4.174 - 4.202: 98.3059% ( 32) 00:13:56.113 4.202 - 4.230: 98.4084% ( 15) 00:13:56.113 4.230 - 4.257: 98.4835% ( 11) 00:13:56.113 4.257 - 4.285: 98.5040% ( 3) 00:13:56.113 4.285 - 4.313: 98.5313% ( 4) 00:13:56.113 4.369 - 4.397: 98.5382% ( 1) 00:13:56.113 4.480 - 4.508: 98.5450% ( 1) 00:13:56.113 4.508 - 4.536: 98.5518% ( 1) 00:13:56.113 4.563 - 4.591: 98.5655% ( 2) 00:13:56.113 4.619 - 4.647: 98.5791% ( 2) 00:13:56.113 4.647 - 4.675: 98.6201% ( 6) 00:13:56.113 4.675 - 4.703: 98.6406% ( 3) 00:13:56.113 4.703 - 4.730: 98.6816% ( 6) 00:13:56.113 4.730 - 4.758: 98.7226% ( 6) 00:13:56.113 4.758 - 4.786: 98.7499% ( 4) 00:13:56.113 4.786 - 4.814: 98.7704% ( 3) 00:13:56.113 4.814 - 4.842: 98.8046% ( 5) 00:13:56.113 4.842 - 4.870: 98.8455% ( 6) 00:13:56.113 4.870 - 4.897: 98.9002% ( 8) 00:13:56.113 4.897 - 4.925: 98.9275% ( 4) 00:13:56.113 4.925 - 4.953: 98.9548% ( 4) 00:13:56.113 4.953 - 4.981: 98.9685% ( 2) 00:13:56.113 4.981 - 5.009: 99.0095% ( 6) 00:13:56.113 5.009 - 5.037: 99.0505% ( 6) 00:13:56.113 5.037 - 5.064: 99.0778% ( 4) 00:13:56.113 5.064 - 5.092: 99.0915% ( 2) 00:13:56.113 5.092 - 5.120: 99.1051% ( 2) 00:13:56.113 5.120 - 5.148: 99.1188% ( 2) 00:13:56.113 5.148 - 5.176: 99.1325% ( 2) 00:13:56.113 5.203 - 5.231: 99.1393% ( 1) 00:13:56.113 5.231 - 5.259: 99.1529% ( 2) 00:13:56.113 5.259 - 5.287: 99.1734% ( 3) 00:13:56.113 5.315 - 5.343: 99.1803% ( 1) 00:13:56.113 5.343 - 5.370: 99.2076% ( 4) 00:13:56.113 5.370 - 5.398: 99.2281% ( 3) 00:13:56.113 5.398 - 5.426: 99.2622% ( 5) 00:13:56.113 5.426 - 5.454: 99.2896% ( 4) 00:13:56.113 5.454 - 5.482: 
99.2964% ( 1) 00:13:56.113 5.482 - 5.510: 99.3237% ( 4) 00:13:56.113 5.510 - 5.537: 99.3374% ( 2) 00:13:56.113 5.537 - 5.565: 99.3510% ( 2) 00:13:56.113 5.565 - 5.593: 99.3579% ( 1) 00:13:56.113 5.593 - 5.621: 99.3784% ( 3) 00:13:56.113 5.621 - 5.649: 99.3989% ( 3) 00:13:56.113 5.649 - 5.677: 99.4057% ( 1) 00:13:56.113 5.677 - 5.704: 99.4194% ( 2) 00:13:56.113 5.704 - 5.732: 99.4399% ( 3) 00:13:56.113 5.732 - 5.760: 99.4535% ( 2) 00:13:56.113 5.760 - 5.788: 99.4808% ( 4) 00:13:56.113 5.788 - 5.816: 99.4945% ( 2) 00:13:56.113 5.816 - 5.843: 99.5013% ( 1) 00:13:56.113 5.843 - 5.871: 99.5150% ( 2) 00:13:56.113 5.871 - 5.899: 99.5287% ( 2) 00:13:56.113 5.899 - 5.927: 99.5355% ( 1) 00:13:56.113 5.927 - 5.955: 99.5423% ( 1) 00:13:56.113 5.955 - 5.983: 99.5491% ( 1) 00:13:56.113 5.983 - 6.010: 99.5696% ( 3) 00:13:56.113 6.038 - 6.066: 99.5765% ( 1) 00:13:56.113 6.261 - 6.289: 99.5901% ( 2) 00:13:56.113 6.317 - 6.344: 99.5970% ( 1) 00:13:56.113 6.372 - 6.400: 99.6038% ( 1) 00:13:56.113 6.428 - 6.456: 99.6106% ( 1) 00:13:56.113 6.456 - 6.483: 99.6175% ( 1) 00:13:56.113 6.511 - 6.539: 99.6243% ( 1) 00:13:56.113 6.539 - 6.567: 99.6311% ( 1) 00:13:56.113 6.595 - 6.623: 99.6380% ( 1) 00:13:56.113 6.650 - 6.678: 99.6448% ( 1) 00:13:56.113 6.678 - 6.706: 99.6516% ( 1) 00:13:56.113 6.790 - 6.817: 99.6653% ( 2) 00:13:56.113 6.817 - 6.845: 99.6721% ( 1) 00:13:56.113 6.901 - 6.929: 99.6858% ( 2) 00:13:56.113 6.957 - 6.984: 99.6994% ( 2) 00:13:56.113 7.068 - 7.096: 99.7063% ( 1) 00:13:56.113 7.123 - 7.179: 99.7336% ( 4) 00:13:56.113 7.179 - 7.235: 99.7404% ( 1) 00:13:56.113 7.290 - 7.346: 99.7541% ( 2) 00:13:56.113 7.402 - 7.457: 99.7677% ( 2) 00:13:56.113 7.569 - 7.624: 99.7746% ( 1) 00:13:56.113 7.624 - 7.680: 99.7814% ( 1) 00:13:56.113 7.680 - 7.736: 99.7882% ( 1) 00:13:56.113 7.736 - 7.791: 99.7951% ( 1) 00:13:56.113 7.791 - 7.847: 99.8019% ( 1) 00:13:56.113 7.847 - 7.903: 99.8224% ( 3) 00:13:56.113 7.958 - 8.014: 99.8292% ( 1) 00:13:56.113 8.014 - 8.070: 99.8429% ( 2) 00:13:56.113 8.070 - 8.125: 99.8497% ( 1) 00:13:56.113 8.125 - 8.181: 99.8565% ( 1) 00:13:56.113 8.237 - 8.292: 99.8770% ( 3) 00:13:56.114 8.403 - 8.459: 99.8839% ( 1) 00:13:56.114 8.515 - 8.570: 99.8907% ( 1) 00:13:56.114 8.570 - 8.626: 99.8975% ( 1) 00:13:56.114 8.682 - 8.737: 99.9044% ( 1) 00:13:56.114 8.793 - 8.849: 99.9180% ( 2) 00:13:56.114 9.127 - 9.183: 99.9249% ( 1) 00:13:56.114 9.183 - 9.238: 99.9317% ( 1) 00:13:56.114 19.478 - 19.590: 99.9385% ( 1) 00:13:56.114 3989.148 - 4017.642: 100.0000% ( 9) 00:13:56.114 00:13:56.114 Complete histogram 00:13:56.114 ================== 00:13:56.114 Range in us Cumulative Count 00:13:56.114 1.976 - 1.990: 0.6626% ( 97) 00:13:56.114 1.990 - 2.003: 18.3482% ( 2589) 00:13:56.114 2.003 - 2.017: 56.6500% ( 5607) 00:13:56.114 2.017 - 2.031: 78.1679% ( 3150) 00:13:56.114 2.031 - 2.045: 89.5348% ( 1664) 00:13:56.114 2.045 - 2.059: 95.1841% ( 827) 00:13:56.114 2.059 - 2.073: 97.1105% ( 282) 00:13:56.114 2.073 - 2.087: 97.9848% ( 128) 00:13:56.114 2.087 - 2.101: 98.5108% ( 77) 00:13:56.114 2.101 - 2.115: 98.7226% ( 31) 00:13:56.114 2.115 - 2.129: 98.8319% ( 16) 00:13:56.114 2.129 - 2.143: 98.8524% ( 3) 00:13:56.114 2.143 - 2.157: 98.8729% ( 3) 00:13:56.114 2.157 - 2.170: 98.8797% ( 1) 00:13:56.114 2.170 - 2.184: 98.8865% ( 1) 00:13:56.114 2.184 - 2.198: 98.9070% ( 3) 00:13:56.114 2.198 - 2.212: 98.9207% ( 2) 00:13:56.114 2.212 - 2.226: 98.9275% ( 1) 00:13:56.114 2.296 - 2.310: 98.9344% ( 1) 00:13:56.114 2.337 - 2.351: 98.9412% ( 1) 00:13:56.114 2.351 - 2.365: 98.9548% ( 2) 00:13:56.114 2.365 - 2.379: 
98.9685% ( 2) 00:13:56.114 2.379 - 2.393: 98.9753% ( 1) 00:13:56.114 2.449 - 2.463: 98.9890% ( 2) 00:13:56.114 2.463 - 2.477: 98.9958% ( 1) 00:13:56.114 2.477 - 2.490: 99.0027% ( 1) 00:13:56.114 2.560 - 2.574: 99.0095% ( 1) 00:13:56.114 2.602 - 2.616: 99.0163% ( 1) 00:13:56.114 2.741 - 2.755: 99.0232% ( 1) 00:13:56.114 2.824 - 2.838: 99.0300% ( 1) 00:13:56.114 2.852 - 2.866: 99.0368% ( 1) 00:13:56.114 2.866 - 2.880: 99.0437% ( 1) 00:13:56.114 2.908 - 2.922: 99.0505% ( 1) 00:13:56.114 2.963 - 2.977: 99.0573% ( 1) 00:13:56.114 2.991 - 3.005: 99.0641% ( 1) 00:13:56.114 3.019 - 3.033: 99.0710% ( 1) 00:13:56.114 3.117 - 3.130: 99.0778% ( 1) 00:13:56.114 3.172 - 3.186: 99.0846% ( 1) 00:13:56.114 3.283 - 3.297: 99.0983% ( 2) 00:13:56.114 3.325 - 3.339: 99.1051% ( 1) 00:13:56.114 3.339 - 3.353: 99.1120% ( 1) 00:13:56.114 3.353 - 3.367: 99.1256% ( 2) 00:13:56.114 3.367 - 3.381: 99.1325% ( 1) 00:13:56.114 3.437 - 3.450: 99.1461% ( 2) 00:13:56.114 [2024-04-26 15:56:35.394974] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:56.114 3.506 - 3.520: 99.1529% ( 1) 00:13:56.114 3.673 - 3.701: 99.1598% ( 1) 00:13:56.114 3.701 - 3.729: 99.1803% ( 3) 00:13:56.114 3.729 - 3.757: 99.1871% ( 1) 00:13:56.114 3.757 - 3.784: 99.2008% ( 2) 00:13:56.114 3.784 - 3.812: 99.2076% ( 1) 00:13:56.114 3.812 - 3.840: 99.2144% ( 1) 00:13:56.114 3.868 - 3.896: 99.2213% ( 1) 00:13:56.114 4.035 - 4.063: 99.2281% ( 1) 00:13:56.114 4.063 - 4.090: 99.2349% ( 1) 00:13:56.114 4.090 - 4.118: 99.2418% ( 1) 00:13:56.114 4.118 - 4.146: 99.2486% ( 1) 00:13:56.114 4.202 - 4.230: 99.2554% ( 1) 00:13:56.114 4.257 - 4.285: 99.2622% ( 1) 00:13:56.114 4.730 - 4.758: 99.2691% ( 1) 00:13:56.114 4.758 - 4.786: 99.2759% ( 1) 00:13:56.114 4.842 - 4.870: 99.2827% ( 1) 00:13:56.114 5.092 - 5.120: 99.2896% ( 1) 00:13:56.114 5.231 - 5.259: 99.2964% ( 1) 00:13:56.114 5.287 - 5.315: 99.3032% ( 1) 00:13:56.114 5.370 - 5.398: 99.3101% ( 1) 00:13:56.114 5.510 - 5.537: 99.3169% ( 1) 00:13:56.114 5.788 - 5.816: 99.3237% ( 1) 00:13:56.114 5.816 - 5.843: 99.3306% ( 1) 00:13:56.114 5.871 - 5.899: 99.3374% ( 1) 00:13:56.114 6.010 - 6.038: 99.3442% ( 1) 00:13:56.114 6.094 - 6.122: 99.3510% ( 1) 00:13:56.114 6.177 - 6.205: 99.3579% ( 1) 00:13:56.114 6.233 - 6.261: 99.3647% ( 1) 00:13:56.114 6.428 - 6.456: 99.3715% ( 1) 00:13:56.114 6.706 - 6.734: 99.3784% ( 1) 00:13:56.114 6.734 - 6.762: 99.3852% ( 1) 00:13:56.114 6.817 - 6.845: 99.3920% ( 1) 00:13:56.114 7.123 - 7.179: 99.3989% ( 1) 00:13:56.114 7.290 - 7.346: 99.4057% ( 1) 00:13:56.114 7.569 - 7.624: 99.4125% ( 1) 00:13:56.114 8.403 - 8.459: 99.4194% ( 1) 00:13:56.114 13.969 - 14.024: 99.4262% ( 1) 00:13:56.114 1040.028 - 1047.151: 99.4330% ( 1) 00:13:56.114 3305.294 - 3319.541: 99.4399% ( 1) 00:13:56.114 3989.148 - 4017.642: 99.9863% ( 80) 00:13:56.114 4986.435 - 5014.929: 99.9932% ( 1) 00:13:56.114 6981.009 - 7009.503: 100.0000% ( 1) 00:13:56.114 00:13:56.114 15:56:35 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:56.114 15:56:35 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:56.114 15:56:35 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:56.114 15:56:35 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:56.114 15:56:35 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:56.114 [ 00:13:56.114 { 00:13:56.114 
"nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:56.114 "subtype": "Discovery", 00:13:56.114 "listen_addresses": [], 00:13:56.114 "allow_any_host": true, 00:13:56.114 "hosts": [] 00:13:56.114 }, 00:13:56.114 { 00:13:56.114 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:56.114 "subtype": "NVMe", 00:13:56.114 "listen_addresses": [ 00:13:56.114 { 00:13:56.114 "transport": "VFIOUSER", 00:13:56.114 "trtype": "VFIOUSER", 00:13:56.114 "adrfam": "IPv4", 00:13:56.114 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:56.114 "trsvcid": "0" 00:13:56.114 } 00:13:56.114 ], 00:13:56.114 "allow_any_host": true, 00:13:56.114 "hosts": [], 00:13:56.114 "serial_number": "SPDK1", 00:13:56.114 "model_number": "SPDK bdev Controller", 00:13:56.114 "max_namespaces": 32, 00:13:56.114 "min_cntlid": 1, 00:13:56.114 "max_cntlid": 65519, 00:13:56.114 "namespaces": [ 00:13:56.114 { 00:13:56.114 "nsid": 1, 00:13:56.114 "bdev_name": "Malloc1", 00:13:56.114 "name": "Malloc1", 00:13:56.114 "nguid": "8DBE59CB4D9D44719E9EA66A0537D0E5", 00:13:56.114 "uuid": "8dbe59cb-4d9d-4471-9e9e-a66a0537d0e5" 00:13:56.114 }, 00:13:56.114 { 00:13:56.114 "nsid": 2, 00:13:56.114 "bdev_name": "Malloc3", 00:13:56.114 "name": "Malloc3", 00:13:56.114 "nguid": "08F41B638DC5487CA2BDF45A0A56FBAD", 00:13:56.114 "uuid": "08f41b63-8dc5-487c-a2bd-f45a0a56fbad" 00:13:56.114 } 00:13:56.114 ] 00:13:56.114 }, 00:13:56.114 { 00:13:56.114 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:56.114 "subtype": "NVMe", 00:13:56.114 "listen_addresses": [ 00:13:56.114 { 00:13:56.114 "transport": "VFIOUSER", 00:13:56.114 "trtype": "VFIOUSER", 00:13:56.114 "adrfam": "IPv4", 00:13:56.114 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:56.114 "trsvcid": "0" 00:13:56.114 } 00:13:56.114 ], 00:13:56.114 "allow_any_host": true, 00:13:56.114 "hosts": [], 00:13:56.114 "serial_number": "SPDK2", 00:13:56.114 "model_number": "SPDK bdev Controller", 00:13:56.114 "max_namespaces": 32, 00:13:56.114 "min_cntlid": 1, 00:13:56.114 "max_cntlid": 65519, 00:13:56.114 "namespaces": [ 00:13:56.114 { 00:13:56.114 "nsid": 1, 00:13:56.114 "bdev_name": "Malloc2", 00:13:56.114 "name": "Malloc2", 00:13:56.114 "nguid": "8585CC16F75149FD85E920A4E5DC72C4", 00:13:56.114 "uuid": "8585cc16-f751-49fd-85e9-20a4e5dc72c4" 00:13:56.114 } 00:13:56.114 ] 00:13:56.114 } 00:13:56.114 ] 00:13:56.114 15:56:35 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:56.114 15:56:35 -- target/nvmf_vfio_user.sh@34 -- # aerpid=2391280 00:13:56.114 15:56:35 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:56.114 15:56:35 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:56.114 15:56:35 -- common/autotest_common.sh@1251 -- # local i=0 00:13:56.114 15:56:35 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:56.114 15:56:35 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:56.114 15:56:35 -- common/autotest_common.sh@1262 -- # return 0 00:13:56.114 15:56:35 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:56.114 15:56:35 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:56.114 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.374 Malloc4 00:13:56.374 15:56:35 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:56.374 [2024-04-26 15:56:35.998151] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:56.634 [2024-04-26 15:56:36.130226] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:56.634 15:56:36 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:56.634 Asynchronous Event Request test 00:13:56.634 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:56.634 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:56.634 Registering asynchronous event callbacks... 00:13:56.634 Starting namespace attribute notice tests for all controllers... 00:13:56.634 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:56.634 aer_cb - Changed Namespace 00:13:56.634 Cleaning up... 00:13:56.634 [ 00:13:56.634 { 00:13:56.634 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:56.634 "subtype": "Discovery", 00:13:56.634 "listen_addresses": [], 00:13:56.634 "allow_any_host": true, 00:13:56.634 "hosts": [] 00:13:56.634 }, 00:13:56.634 { 00:13:56.634 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:56.634 "subtype": "NVMe", 00:13:56.634 "listen_addresses": [ 00:13:56.634 { 00:13:56.634 "transport": "VFIOUSER", 00:13:56.634 "trtype": "VFIOUSER", 00:13:56.634 "adrfam": "IPv4", 00:13:56.634 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:56.634 "trsvcid": "0" 00:13:56.634 } 00:13:56.634 ], 00:13:56.634 "allow_any_host": true, 00:13:56.634 "hosts": [], 00:13:56.634 "serial_number": "SPDK1", 00:13:56.634 "model_number": "SPDK bdev Controller", 00:13:56.634 "max_namespaces": 32, 00:13:56.634 "min_cntlid": 1, 00:13:56.634 "max_cntlid": 65519, 00:13:56.634 "namespaces": [ 00:13:56.634 { 00:13:56.634 "nsid": 1, 00:13:56.634 "bdev_name": "Malloc1", 00:13:56.634 "name": "Malloc1", 00:13:56.634 "nguid": "8DBE59CB4D9D44719E9EA66A0537D0E5", 00:13:56.634 "uuid": "8dbe59cb-4d9d-4471-9e9e-a66a0537d0e5" 00:13:56.634 }, 00:13:56.634 { 00:13:56.634 "nsid": 2, 00:13:56.634 "bdev_name": "Malloc3", 00:13:56.634 "name": "Malloc3", 00:13:56.634 "nguid": "08F41B638DC5487CA2BDF45A0A56FBAD", 00:13:56.634 "uuid": "08f41b63-8dc5-487c-a2bd-f45a0a56fbad" 00:13:56.634 } 00:13:56.634 ] 00:13:56.634 }, 00:13:56.634 { 00:13:56.634 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:56.634 "subtype": "NVMe", 00:13:56.634 "listen_addresses": [ 00:13:56.634 { 00:13:56.634 "transport": "VFIOUSER", 00:13:56.634 "trtype": "VFIOUSER", 00:13:56.634 "adrfam": "IPv4", 00:13:56.634 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:56.634 "trsvcid": "0" 00:13:56.634 } 00:13:56.634 ], 00:13:56.634 "allow_any_host": true, 00:13:56.634 "hosts": [], 00:13:56.634 "serial_number": "SPDK2", 00:13:56.634 "model_number": "SPDK bdev Controller", 00:13:56.634 "max_namespaces": 32, 00:13:56.634 "min_cntlid": 1, 
00:13:56.634 "max_cntlid": 65519, 00:13:56.634 "namespaces": [ 00:13:56.634 { 00:13:56.634 "nsid": 1, 00:13:56.634 "bdev_name": "Malloc2", 00:13:56.634 "name": "Malloc2", 00:13:56.634 "nguid": "8585CC16F75149FD85E920A4E5DC72C4", 00:13:56.634 "uuid": "8585cc16-f751-49fd-85e9-20a4e5dc72c4" 00:13:56.634 }, 00:13:56.634 { 00:13:56.634 "nsid": 2, 00:13:56.634 "bdev_name": "Malloc4", 00:13:56.634 "name": "Malloc4", 00:13:56.634 "nguid": "36F07D874F014E3791133C8C28AE8018", 00:13:56.634 "uuid": "36f07d87-4f01-4e37-9113-3c8c28ae8018" 00:13:56.634 } 00:13:56.634 ] 00:13:56.634 } 00:13:56.634 ] 00:13:56.893 15:56:36 -- target/nvmf_vfio_user.sh@44 -- # wait 2391280 00:13:56.893 15:56:36 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:56.893 15:56:36 -- target/nvmf_vfio_user.sh@95 -- # killprocess 2383190 00:13:56.893 15:56:36 -- common/autotest_common.sh@936 -- # '[' -z 2383190 ']' 00:13:56.893 15:56:36 -- common/autotest_common.sh@940 -- # kill -0 2383190 00:13:56.893 15:56:36 -- common/autotest_common.sh@941 -- # uname 00:13:56.893 15:56:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:56.893 15:56:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2383190 00:13:56.893 15:56:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:56.893 15:56:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:56.893 15:56:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2383190' 00:13:56.893 killing process with pid 2383190 00:13:56.893 15:56:36 -- common/autotest_common.sh@955 -- # kill 2383190 00:13:56.893 [2024-04-26 15:56:36.385175] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:13:56.893 15:56:36 -- common/autotest_common.sh@960 -- # wait 2383190 00:13:58.823 15:56:38 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:58.823 15:56:38 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:58.823 15:56:38 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:58.823 15:56:38 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:58.823 15:56:38 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:58.823 15:56:38 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2391747 00:13:58.823 15:56:38 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2391747' 00:13:58.823 Process pid: 2391747 00:13:58.823 15:56:38 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:58.823 15:56:38 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:58.823 15:56:38 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2391747 00:13:58.823 15:56:38 -- common/autotest_common.sh@817 -- # '[' -z 2391747 ']' 00:13:58.823 15:56:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.823 15:56:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:58.823 15:56:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:58.823 15:56:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:58.823 15:56:38 -- common/autotest_common.sh@10 -- # set +x 00:13:59.082 [2024-04-26 15:56:38.507239] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:59.082 [2024-04-26 15:56:38.509200] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:13:59.082 [2024-04-26 15:56:38.509267] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.082 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.082 [2024-04-26 15:56:38.613497] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:59.341 [2024-04-26 15:56:38.831494] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.341 [2024-04-26 15:56:38.831537] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.341 [2024-04-26 15:56:38.831549] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.341 [2024-04-26 15:56:38.831558] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.341 [2024-04-26 15:56:38.831570] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:59.341 [2024-04-26 15:56:38.831665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.342 [2024-04-26 15:56:38.831749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.342 [2024-04-26 15:56:38.831812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.342 [2024-04-26 15:56:38.831821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:59.601 [2024-04-26 15:56:39.227331] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:13:59.601 [2024-04-26 15:56:39.228502] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:13:59.601 [2024-04-26 15:56:39.229853] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:13:59.601 [2024-04-26 15:56:39.230818] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:59.601 [2024-04-26 15:56:39.230954] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
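Note: this second pass repeats the vfio-user setup with the target in interrupt mode, which is what the spdk_thread_set_interrupt_mode notices above reflect: the app thread and every nvmf poll-group thread are switched to intr mode before any subsystem is created. Condensed from the trace (paths shortened), the variant boils down to two differences in the bring-up:

  nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode
  rpc.py nvmf_create_transport -t VFIOUSER -M -I

The rest of the setup (Malloc1/Malloc2 bdevs, cnode1/cnode2 subsystems, vfio-user listeners under /var/run/vfio-user) matches the earlier non-interrupt run.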
00:13:59.860 15:56:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:59.860 15:56:39 -- common/autotest_common.sh@850 -- # return 0 00:13:59.860 15:56:39 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:00.798 15:56:40 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:00.798 15:56:40 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:00.798 15:56:40 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:00.798 15:56:40 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:00.798 15:56:40 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:01.057 15:56:40 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:01.057 Malloc1 00:14:01.316 15:56:40 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:01.316 15:56:40 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:01.576 15:56:41 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:01.835 15:56:41 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:01.835 15:56:41 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:01.835 15:56:41 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:01.835 Malloc2 00:14:02.094 15:56:41 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:02.094 15:56:41 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:02.353 15:56:41 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:02.611 15:56:42 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:02.611 15:56:42 -- target/nvmf_vfio_user.sh@95 -- # killprocess 2391747 00:14:02.611 15:56:42 -- common/autotest_common.sh@936 -- # '[' -z 2391747 ']' 00:14:02.611 15:56:42 -- common/autotest_common.sh@940 -- # kill -0 2391747 00:14:02.611 15:56:42 -- common/autotest_common.sh@941 -- # uname 00:14:02.611 15:56:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:02.611 15:56:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2391747 00:14:02.611 15:56:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:02.611 15:56:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:02.611 15:56:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2391747' 00:14:02.611 killing process with pid 2391747 00:14:02.611 15:56:42 -- common/autotest_common.sh@955 -- # kill 2391747 00:14:02.611 15:56:42 -- common/autotest_common.sh@960 -- # wait 2391747 00:14:04.517 15:56:43 -- target/nvmf_vfio_user.sh@97 -- # rm -rf 
/var/run/vfio-user 00:14:04.517 15:56:43 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:04.517 00:14:04.517 real 0m57.005s 00:14:04.517 user 3m36.639s 00:14:04.517 sys 0m4.517s 00:14:04.517 15:56:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:04.517 15:56:43 -- common/autotest_common.sh@10 -- # set +x 00:14:04.517 ************************************ 00:14:04.517 END TEST nvmf_vfio_user 00:14:04.517 ************************************ 00:14:04.517 15:56:43 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:04.517 15:56:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:04.517 15:56:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:04.517 15:56:43 -- common/autotest_common.sh@10 -- # set +x 00:14:04.517 ************************************ 00:14:04.517 START TEST nvmf_vfio_user_nvme_compliance 00:14:04.517 ************************************ 00:14:04.517 15:56:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:04.517 * Looking for test storage... 00:14:04.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:04.517 15:56:43 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:04.517 15:56:43 -- nvmf/common.sh@7 -- # uname -s 00:14:04.517 15:56:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.517 15:56:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.517 15:56:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.517 15:56:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.517 15:56:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.517 15:56:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.517 15:56:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.517 15:56:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.517 15:56:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.517 15:56:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.517 15:56:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:04.517 15:56:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:04.517 15:56:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.517 15:56:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.517 15:56:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:04.517 15:56:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.517 15:56:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:04.517 15:56:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.517 15:56:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.517 15:56:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.517 15:56:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.517 15:56:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.517 15:56:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.517 15:56:43 -- paths/export.sh@5 -- # export PATH 00:14:04.517 15:56:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.517 15:56:43 -- nvmf/common.sh@47 -- # : 0 00:14:04.517 15:56:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:04.517 15:56:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:04.517 15:56:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.517 15:56:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.517 15:56:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.517 15:56:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:04.517 15:56:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:04.517 15:56:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:04.517 15:56:44 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:04.517 15:56:44 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:04.517 15:56:44 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:04.517 15:56:44 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:04.517 15:56:44 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:04.517 15:56:44 -- compliance/compliance.sh@20 -- # nvmfpid=2392751 00:14:04.517 15:56:44 -- compliance/compliance.sh@21 -- # echo 'Process pid: 2392751' 00:14:04.517 Process pid: 2392751 00:14:04.517 15:56:44 
-- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:04.517 15:56:44 -- compliance/compliance.sh@24 -- # waitforlisten 2392751 00:14:04.517 15:56:44 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:04.517 15:56:44 -- common/autotest_common.sh@817 -- # '[' -z 2392751 ']' 00:14:04.517 15:56:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.517 15:56:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:04.517 15:56:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.517 15:56:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:04.517 15:56:44 -- common/autotest_common.sh@10 -- # set +x 00:14:04.517 [2024-04-26 15:56:44.085455] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:04.517 [2024-04-26 15:56:44.085542] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.517 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.517 [2024-04-26 15:56:44.189673] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:04.777 [2024-04-26 15:56:44.401884] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.777 [2024-04-26 15:56:44.401932] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.777 [2024-04-26 15:56:44.401942] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.777 [2024-04-26 15:56:44.401952] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.777 [2024-04-26 15:56:44.401965] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
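Note: the compliance suite drives a single vfio-user controller through the admin and I/O queue corner cases listed in the CUnit output below. With the -m 0x7 target above listening on /var/tmp/spdk.sock, the script's bring-up condenses to the following rpc_cmd calls plus the test binary itself (rpc_cmd is the autotest helper around scripts/rpc.py, so the same calls work with rpc.py directly):

  rpc_cmd nvmf_create_transport -t VFIOUSER
  rpc_cmd bdev_malloc_create 64 512 -b malloc0
  rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'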
00:14:04.777 [2024-04-26 15:56:44.402095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.777 [2024-04-26 15:56:44.402156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.777 [2024-04-26 15:56:44.402162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.346 15:56:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:05.346 15:56:44 -- common/autotest_common.sh@850 -- # return 0 00:14:05.346 15:56:44 -- compliance/compliance.sh@26 -- # sleep 1 00:14:06.284 15:56:45 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:06.284 15:56:45 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:06.284 15:56:45 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:06.284 15:56:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.284 15:56:45 -- common/autotest_common.sh@10 -- # set +x 00:14:06.284 15:56:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:06.284 15:56:45 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:06.284 15:56:45 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:06.284 15:56:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.284 15:56:45 -- common/autotest_common.sh@10 -- # set +x 00:14:06.544 malloc0 00:14:06.544 15:56:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:06.544 15:56:45 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:06.544 15:56:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.544 15:56:45 -- common/autotest_common.sh@10 -- # set +x 00:14:06.544 15:56:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:06.544 15:56:46 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:06.544 15:56:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.544 15:56:46 -- common/autotest_common.sh@10 -- # set +x 00:14:06.544 15:56:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:06.544 15:56:46 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:06.544 15:56:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.544 15:56:46 -- common/autotest_common.sh@10 -- # set +x 00:14:06.544 15:56:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:06.544 15:56:46 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:06.544 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.544 00:14:06.544 00:14:06.544 CUnit - A unit testing framework for C - Version 2.1-3 00:14:06.544 http://cunit.sourceforge.net/ 00:14:06.544 00:14:06.544 00:14:06.544 Suite: nvme_compliance 00:14:06.803 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-26 15:56:46.249832] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:06.803 [2024-04-26 15:56:46.251332] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:06.803 [2024-04-26 15:56:46.251356] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:06.803 [2024-04-26 15:56:46.251368] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:06.803 
[2024-04-26 15:56:46.252858] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:06.803 passed 00:14:06.803 Test: admin_identify_ctrlr_verify_fused ...[2024-04-26 15:56:46.363737] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:06.803 [2024-04-26 15:56:46.366765] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:06.803 passed 00:14:06.803 Test: admin_identify_ns ...[2024-04-26 15:56:46.476243] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:07.061 [2024-04-26 15:56:46.536091] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:07.061 [2024-04-26 15:56:46.544098] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:07.061 [2024-04-26 15:56:46.565200] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:07.061 passed 00:14:07.062 Test: admin_get_features_mandatory_features ...[2024-04-26 15:56:46.675469] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:07.062 [2024-04-26 15:56:46.678493] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:07.062 passed 00:14:07.321 Test: admin_get_features_optional_features ...[2024-04-26 15:56:46.789325] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:07.321 [2024-04-26 15:56:46.792344] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:07.321 passed 00:14:07.321 Test: admin_set_features_number_of_queues ...[2024-04-26 15:56:46.903442] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:07.580 [2024-04-26 15:56:47.011040] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:07.580 passed 00:14:07.580 Test: admin_get_log_page_mandatory_logs ...[2024-04-26 15:56:47.122360] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:07.580 [2024-04-26 15:56:47.127405] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:07.580 passed 00:14:07.580 Test: admin_get_log_page_with_lpo ...[2024-04-26 15:56:47.235516] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:07.839 [2024-04-26 15:56:47.303091] ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:07.839 [2024-04-26 15:56:47.319218] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:07.839 passed 00:14:07.839 Test: fabric_property_get ...[2024-04-26 15:56:47.427549] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:07.839 [2024-04-26 15:56:47.428869] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:07.839 [2024-04-26 15:56:47.430575] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:07.839 passed 00:14:08.098 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-26 15:56:47.543425] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:08.098 [2024-04-26 15:56:47.544758] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:08.098 [2024-04-26 15:56:47.548457] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 
00:14:08.098 passed 00:14:08.098 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-26 15:56:47.658502] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:08.098 [2024-04-26 15:56:47.744086] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:08.098 [2024-04-26 15:56:47.760083] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:08.098 [2024-04-26 15:56:47.765810] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:08.358 passed 00:14:08.358 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-26 15:56:47.875466] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:08.358 [2024-04-26 15:56:47.876790] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:08.358 [2024-04-26 15:56:47.878497] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:08.358 passed 00:14:08.358 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-26 15:56:47.988464] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:08.617 [2024-04-26 15:56:48.066095] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:08.617 [2024-04-26 15:56:48.090086] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:08.617 [2024-04-26 15:56:48.095874] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:08.617 passed 00:14:08.617 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-26 15:56:48.206097] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:08.617 [2024-04-26 15:56:48.207473] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:08.617 [2024-04-26 15:56:48.207510] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:08.617 [2024-04-26 15:56:48.209128] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:08.617 passed 00:14:08.877 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-26 15:56:48.320565] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:08.877 [2024-04-26 15:56:48.414089] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:08.877 [2024-04-26 15:56:48.422079] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:08.877 [2024-04-26 15:56:48.430088] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:08.877 [2024-04-26 15:56:48.438084] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:08.877 [2024-04-26 15:56:48.467904] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:08.877 passed 00:14:09.136 Test: admin_create_io_sq_verify_pc ...[2024-04-26 15:56:48.580295] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:09.136 [2024-04-26 15:56:48.596120] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:09.136 [2024-04-26 15:56:48.613949] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:09.136 passed 00:14:09.136 Test: admin_create_io_qp_max_qps ...[2024-04-26 15:56:48.725858] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:10.606 [2024-04-26 15:56:49.808430] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:14:10.606 [2024-04-26 15:56:50.238975] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:10.891 passed 00:14:10.891 Test: admin_create_io_sq_shared_cq ...[2024-04-26 15:56:50.348606] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:10.891 [2024-04-26 15:56:50.482101] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:10.891 [2024-04-26 15:56:50.519188] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:11.149 passed 00:14:11.149 00:14:11.149 Run Summary: Type Total Ran Passed Failed Inactive 00:14:11.149 suites 1 1 n/a 0 0 00:14:11.149 tests 18 18 18 0 0 00:14:11.149 asserts 360 360 360 0 n/a 00:14:11.149 00:14:11.149 Elapsed time = 1.839 seconds 00:14:11.149 15:56:50 -- compliance/compliance.sh@42 -- # killprocess 2392751 00:14:11.149 15:56:50 -- common/autotest_common.sh@936 -- # '[' -z 2392751 ']' 00:14:11.149 15:56:50 -- common/autotest_common.sh@940 -- # kill -0 2392751 00:14:11.149 15:56:50 -- common/autotest_common.sh@941 -- # uname 00:14:11.149 15:56:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:11.149 15:56:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2392751 00:14:11.149 15:56:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:11.149 15:56:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:11.149 15:56:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2392751' 00:14:11.149 killing process with pid 2392751 00:14:11.149 15:56:50 -- common/autotest_common.sh@955 -- # kill 2392751 00:14:11.149 15:56:50 -- common/autotest_common.sh@960 -- # wait 2392751 00:14:12.527 15:56:52 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:12.527 15:56:52 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:12.527 00:14:12.527 real 0m8.266s 00:14:12.527 user 0m22.401s 00:14:12.527 sys 0m0.644s 00:14:12.527 15:56:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:12.527 15:56:52 -- common/autotest_common.sh@10 -- # set +x 00:14:12.527 ************************************ 00:14:12.527 END TEST nvmf_vfio_user_nvme_compliance 00:14:12.527 ************************************ 00:14:12.527 15:56:52 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:12.527 15:56:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:12.527 15:56:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:12.527 15:56:52 -- common/autotest_common.sh@10 -- # set +x 00:14:12.786 ************************************ 00:14:12.787 START TEST nvmf_vfio_user_fuzz 00:14:12.787 ************************************ 00:14:12.787 15:56:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:12.787 * Looking for test storage... 
00:14:12.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:12.787 15:56:52 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.787 15:56:52 -- nvmf/common.sh@7 -- # uname -s 00:14:12.787 15:56:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.787 15:56:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.787 15:56:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.787 15:56:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.787 15:56:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.787 15:56:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.787 15:56:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.787 15:56:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.787 15:56:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.787 15:56:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.787 15:56:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:12.787 15:56:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:12.787 15:56:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.787 15:56:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.787 15:56:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:12.787 15:56:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.787 15:56:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:12.787 15:56:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.787 15:56:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.787 15:56:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.787 15:56:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.787 15:56:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.787 15:56:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.787 15:56:52 -- paths/export.sh@5 -- # export PATH 00:14:12.787 15:56:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.787 15:56:52 -- nvmf/common.sh@47 -- # : 0 00:14:12.787 15:56:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:12.787 15:56:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:12.787 15:56:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.787 15:56:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.787 15:56:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.787 15:56:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:12.787 15:56:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:12.787 15:56:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:12.787 15:56:52 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:12.787 15:56:52 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:12.787 15:56:52 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:12.787 15:56:52 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:12.787 15:56:52 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:12.787 15:56:52 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:12.787 15:56:52 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:12.787 15:56:52 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2394197 00:14:12.787 15:56:52 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:12.787 15:56:52 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2394197' 00:14:12.787 Process pid: 2394197 00:14:12.787 15:56:52 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:12.787 15:56:52 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2394197 00:14:12.787 15:56:52 -- common/autotest_common.sh@817 -- # '[' -z 2394197 ']' 00:14:12.787 15:56:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.787 15:56:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:12.787 15:56:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
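Note: the fuzz target is wired up the same way as the compliance target (VFIOUSER transport, one malloc-backed subsystem at /var/run/vfio-user), and nvme_fuzz is then run against it for 30 seconds on a single core with a fixed -S 123456 seed argument. Condensed from the trace that follows:

  rpc_cmd nvmf_create_transport -t VFIOUSER
  rpc_cmd bdev_malloc_create 64 512 -b malloc0
  rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a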
00:14:12.787 15:56:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:12.787 15:56:52 -- common/autotest_common.sh@10 -- # set +x 00:14:13.726 15:56:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:13.726 15:56:53 -- common/autotest_common.sh@850 -- # return 0 00:14:13.726 15:56:53 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:14.665 15:56:54 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:14.665 15:56:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.665 15:56:54 -- common/autotest_common.sh@10 -- # set +x 00:14:14.665 15:56:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.665 15:56:54 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:14.665 15:56:54 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:14.665 15:56:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.665 15:56:54 -- common/autotest_common.sh@10 -- # set +x 00:14:14.925 malloc0 00:14:14.925 15:56:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.925 15:56:54 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:14.925 15:56:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.925 15:56:54 -- common/autotest_common.sh@10 -- # set +x 00:14:14.925 15:56:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.925 15:56:54 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:14.925 15:56:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.925 15:56:54 -- common/autotest_common.sh@10 -- # set +x 00:14:14.925 15:56:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.925 15:56:54 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:14.925 15:56:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.925 15:56:54 -- common/autotest_common.sh@10 -- # set +x 00:14:14.925 15:56:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.925 15:56:54 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:14.925 15:56:54 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:47.015 Fuzzing completed. 
Shutting down the fuzz application 00:14:47.015 00:14:47.015 Dumping successful admin opcodes: 00:14:47.015 8, 9, 10, 24, 00:14:47.015 Dumping successful io opcodes: 00:14:47.015 0, 00:14:47.015 NS: 0x200003a1eec0 I/O qp, Total commands completed: 816787, total successful commands: 3159, random_seed: 2890479872 00:14:47.015 NS: 0x200003a1eec0 admin qp, Total commands completed: 202143, total successful commands: 1617, random_seed: 3868496448 00:14:47.015 15:57:25 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:47.015 15:57:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:47.015 15:57:25 -- common/autotest_common.sh@10 -- # set +x 00:14:47.015 15:57:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:47.015 15:57:25 -- target/vfio_user_fuzz.sh@46 -- # killprocess 2394197 00:14:47.015 15:57:25 -- common/autotest_common.sh@936 -- # '[' -z 2394197 ']' 00:14:47.015 15:57:25 -- common/autotest_common.sh@940 -- # kill -0 2394197 00:14:47.015 15:57:25 -- common/autotest_common.sh@941 -- # uname 00:14:47.015 15:57:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:47.015 15:57:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2394197 00:14:47.015 15:57:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:47.015 15:57:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:47.015 15:57:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2394197' 00:14:47.015 killing process with pid 2394197 00:14:47.015 15:57:25 -- common/autotest_common.sh@955 -- # kill 2394197 00:14:47.015 15:57:25 -- common/autotest_common.sh@960 -- # wait 2394197 00:14:47.585 15:57:27 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:47.585 15:57:27 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:47.585 00:14:47.585 real 0m34.808s 00:14:47.585 user 0m38.753s 00:14:47.585 sys 0m26.438s 00:14:47.585 15:57:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:47.585 15:57:27 -- common/autotest_common.sh@10 -- # set +x 00:14:47.585 ************************************ 00:14:47.585 END TEST nvmf_vfio_user_fuzz 00:14:47.585 ************************************ 00:14:47.585 15:57:27 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:47.585 15:57:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:47.585 15:57:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:47.585 15:57:27 -- common/autotest_common.sh@10 -- # set +x 00:14:47.845 ************************************ 00:14:47.845 START TEST nvmf_host_management 00:14:47.845 ************************************ 00:14:47.845 15:57:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:47.845 * Looking for test storage... 
00:14:47.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:47.845 15:57:27 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:47.845 15:57:27 -- nvmf/common.sh@7 -- # uname -s 00:14:47.845 15:57:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.845 15:57:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.845 15:57:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.845 15:57:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.845 15:57:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.845 15:57:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.845 15:57:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.845 15:57:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.845 15:57:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.845 15:57:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.845 15:57:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:47.845 15:57:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:47.845 15:57:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.845 15:57:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.845 15:57:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:47.845 15:57:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.845 15:57:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:47.845 15:57:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.845 15:57:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.845 15:57:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.845 15:57:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.845 15:57:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.845 15:57:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.845 15:57:27 -- paths/export.sh@5 -- # export PATH 00:14:47.845 15:57:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.845 15:57:27 -- nvmf/common.sh@47 -- # : 0 00:14:47.845 15:57:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:47.845 15:57:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:47.845 15:57:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:47.846 15:57:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.846 15:57:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.846 15:57:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:47.846 15:57:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:47.846 15:57:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:47.846 15:57:27 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:47.846 15:57:27 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:47.846 15:57:27 -- target/host_management.sh@105 -- # nvmftestinit 00:14:47.846 15:57:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:47.846 15:57:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.846 15:57:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:47.846 15:57:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:47.846 15:57:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:47.846 15:57:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.846 15:57:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.846 15:57:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.846 15:57:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:47.846 15:57:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:47.846 15:57:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:47.846 15:57:27 -- common/autotest_common.sh@10 -- # set +x 00:14:53.123 15:57:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:53.123 15:57:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:53.123 15:57:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:53.123 15:57:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:53.123 15:57:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:53.123 15:57:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:53.123 15:57:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:53.123 15:57:32 -- nvmf/common.sh@295 -- # net_devs=() 00:14:53.123 15:57:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:53.123 
15:57:32 -- nvmf/common.sh@296 -- # e810=() 00:14:53.123 15:57:32 -- nvmf/common.sh@296 -- # local -ga e810 00:14:53.123 15:57:32 -- nvmf/common.sh@297 -- # x722=() 00:14:53.123 15:57:32 -- nvmf/common.sh@297 -- # local -ga x722 00:14:53.123 15:57:32 -- nvmf/common.sh@298 -- # mlx=() 00:14:53.123 15:57:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:53.123 15:57:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:53.123 15:57:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:53.123 15:57:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:53.123 15:57:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:53.123 15:57:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:53.123 15:57:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:53.123 15:57:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:53.123 15:57:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:53.123 15:57:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:53.123 15:57:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:53.123 15:57:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:53.123 15:57:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:53.123 15:57:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:53.123 15:57:32 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:53.123 15:57:32 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:53.123 15:57:32 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:53.123 15:57:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:53.123 15:57:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:53.123 15:57:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:53.123 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:53.123 15:57:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:53.123 15:57:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:53.123 15:57:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.123 15:57:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.124 15:57:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:53.124 15:57:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:53.124 15:57:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:53.124 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:53.124 15:57:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:53.124 15:57:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:53.124 15:57:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.124 15:57:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.124 15:57:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:53.124 15:57:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:53.124 15:57:32 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:53.124 15:57:32 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:53.124 15:57:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:53.124 15:57:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.124 15:57:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:53.124 15:57:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.124 15:57:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:86:00.0: cvl_0_0' 00:14:53.124 Found net devices under 0000:86:00.0: cvl_0_0 00:14:53.124 15:57:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.124 15:57:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:53.124 15:57:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.124 15:57:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:53.124 15:57:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.124 15:57:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:53.124 Found net devices under 0000:86:00.1: cvl_0_1 00:14:53.124 15:57:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.124 15:57:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:53.124 15:57:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:53.124 15:57:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:53.124 15:57:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:53.124 15:57:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:53.124 15:57:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.124 15:57:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:53.124 15:57:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:53.124 15:57:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:53.124 15:57:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:53.124 15:57:32 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:53.124 15:57:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:53.124 15:57:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:53.124 15:57:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.124 15:57:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:53.124 15:57:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:53.124 15:57:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:53.124 15:57:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:53.383 15:57:32 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:53.383 15:57:32 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:53.383 15:57:32 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:53.383 15:57:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:53.383 15:57:32 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:53.383 15:57:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:53.383 15:57:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:53.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:53.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:14:53.383 00:14:53.383 --- 10.0.0.2 ping statistics --- 00:14:53.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.383 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:14:53.383 15:57:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:53.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:53.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:14:53.383 00:14:53.383 --- 10.0.0.1 ping statistics --- 00:14:53.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.383 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:14:53.383 15:57:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.383 15:57:32 -- nvmf/common.sh@411 -- # return 0 00:14:53.383 15:57:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:53.383 15:57:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.383 15:57:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:53.383 15:57:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:53.383 15:57:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.383 15:57:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:53.383 15:57:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:53.383 15:57:32 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:14:53.383 15:57:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:53.383 15:57:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:53.383 15:57:32 -- common/autotest_common.sh@10 -- # set +x 00:14:53.643 ************************************ 00:14:53.643 START TEST nvmf_host_management 00:14:53.643 ************************************ 00:14:53.643 15:57:33 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:14:53.643 15:57:33 -- target/host_management.sh@69 -- # starttarget 00:14:53.643 15:57:33 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:53.643 15:57:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:53.643 15:57:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:53.643 15:57:33 -- common/autotest_common.sh@10 -- # set +x 00:14:53.643 15:57:33 -- nvmf/common.sh@470 -- # nvmfpid=2403534 00:14:53.643 15:57:33 -- nvmf/common.sh@471 -- # waitforlisten 2403534 00:14:53.643 15:57:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:53.643 15:57:33 -- common/autotest_common.sh@817 -- # '[' -z 2403534 ']' 00:14:53.643 15:57:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.643 15:57:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:53.643 15:57:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.643 15:57:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:53.643 15:57:33 -- common/autotest_common.sh@10 -- # set +x 00:14:53.643 [2024-04-26 15:57:33.211496] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:53.643 [2024-04-26 15:57:33.211594] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.643 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.643 [2024-04-26 15:57:33.320833] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:53.902 [2024-04-26 15:57:33.542568] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:53.902 [2024-04-26 15:57:33.542618] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.902 [2024-04-26 15:57:33.542628] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.902 [2024-04-26 15:57:33.542639] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.902 [2024-04-26 15:57:33.542647] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.902 [2024-04-26 15:57:33.542769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.902 [2024-04-26 15:57:33.542840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.902 [2024-04-26 15:57:33.542919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.902 [2024-04-26 15:57:33.542942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:54.470 15:57:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:54.470 15:57:33 -- common/autotest_common.sh@850 -- # return 0 00:14:54.470 15:57:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:54.470 15:57:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:54.470 15:57:33 -- common/autotest_common.sh@10 -- # set +x 00:14:54.470 15:57:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.470 15:57:34 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:54.470 15:57:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:54.470 15:57:34 -- common/autotest_common.sh@10 -- # set +x 00:14:54.470 [2024-04-26 15:57:34.025869] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.470 15:57:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:54.470 15:57:34 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:54.470 15:57:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:54.470 15:57:34 -- common/autotest_common.sh@10 -- # set +x 00:14:54.471 15:57:34 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:54.471 15:57:34 -- target/host_management.sh@23 -- # cat 00:14:54.471 15:57:34 -- target/host_management.sh@30 -- # rpc_cmd 00:14:54.471 15:57:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:54.471 15:57:34 -- common/autotest_common.sh@10 -- # set +x 00:14:54.471 Malloc0 00:14:54.730 [2024-04-26 15:57:34.157431] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:54.730 15:57:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:54.730 15:57:34 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:54.730 15:57:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:54.730 15:57:34 -- common/autotest_common.sh@10 -- # set +x 00:14:54.730 15:57:34 -- target/host_management.sh@73 -- # perfpid=2403742 00:14:54.730 15:57:34 -- target/host_management.sh@74 -- # waitforlisten 2403742 /var/tmp/bdevperf.sock 00:14:54.730 15:57:34 -- common/autotest_common.sh@817 -- # '[' -z 2403742 ']' 00:14:54.730 15:57:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:54.730 15:57:34 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w 
verify -t 10 00:14:54.730 15:57:34 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:54.730 15:57:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:54.730 15:57:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:54.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:54.730 15:57:34 -- nvmf/common.sh@521 -- # config=() 00:14:54.730 15:57:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:54.730 15:57:34 -- nvmf/common.sh@521 -- # local subsystem config 00:14:54.730 15:57:34 -- common/autotest_common.sh@10 -- # set +x 00:14:54.730 15:57:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:54.730 15:57:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:54.730 { 00:14:54.730 "params": { 00:14:54.730 "name": "Nvme$subsystem", 00:14:54.730 "trtype": "$TEST_TRANSPORT", 00:14:54.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:54.730 "adrfam": "ipv4", 00:14:54.730 "trsvcid": "$NVMF_PORT", 00:14:54.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:54.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:54.730 "hdgst": ${hdgst:-false}, 00:14:54.730 "ddgst": ${ddgst:-false} 00:14:54.730 }, 00:14:54.730 "method": "bdev_nvme_attach_controller" 00:14:54.730 } 00:14:54.730 EOF 00:14:54.730 )") 00:14:54.730 15:57:34 -- nvmf/common.sh@543 -- # cat 00:14:54.730 15:57:34 -- nvmf/common.sh@545 -- # jq . 00:14:54.730 15:57:34 -- nvmf/common.sh@546 -- # IFS=, 00:14:54.730 15:57:34 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:54.730 "params": { 00:14:54.730 "name": "Nvme0", 00:14:54.730 "trtype": "tcp", 00:14:54.730 "traddr": "10.0.0.2", 00:14:54.730 "adrfam": "ipv4", 00:14:54.730 "trsvcid": "4420", 00:14:54.730 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:54.730 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:54.730 "hdgst": false, 00:14:54.730 "ddgst": false 00:14:54.730 }, 00:14:54.730 "method": "bdev_nvme_attach_controller" 00:14:54.730 }' 00:14:54.730 [2024-04-26 15:57:34.275903] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:54.730 [2024-04-26 15:57:34.275995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2403742 ] 00:14:54.730 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.730 [2024-04-26 15:57:34.380808] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.990 [2024-04-26 15:57:34.621094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.559 Running I/O for 10 seconds... 
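The bdevperf job above reads its bdev configuration from the process substitution /dev/fd/63, which carries the JSON fragment printed by gen_nvmf_target_json. A minimal stand-alone sketch of the same attachment, assuming the fragment gets wrapped in the usual SPDK "subsystems"/"bdev" config layout (the file name /tmp/bdevperf_nvme.json is only illustrative; paths are relative to the spdk tree):

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload as the logged run: 64 outstanding 64 KiB verify I/Os for 10 seconds.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10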
00:14:55.819 15:57:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:55.819 15:57:35 -- common/autotest_common.sh@850 -- # return 0 00:14:55.819 15:57:35 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:55.819 15:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:55.819 15:57:35 -- common/autotest_common.sh@10 -- # set +x 00:14:55.819 15:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:55.819 15:57:35 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:55.819 15:57:35 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:55.819 15:57:35 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:55.819 15:57:35 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:55.819 15:57:35 -- target/host_management.sh@52 -- # local ret=1 00:14:55.819 15:57:35 -- target/host_management.sh@53 -- # local i 00:14:55.819 15:57:35 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:55.819 15:57:35 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:55.819 15:57:35 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:55.819 15:57:35 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:55.819 15:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:55.819 15:57:35 -- common/autotest_common.sh@10 -- # set +x 00:14:55.819 15:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:55.819 15:57:35 -- target/host_management.sh@55 -- # read_io_count=3 00:14:55.819 15:57:35 -- target/host_management.sh@58 -- # '[' 3 -ge 100 ']' 00:14:55.819 15:57:35 -- target/host_management.sh@62 -- # sleep 0.25 00:14:56.107 15:57:35 -- target/host_management.sh@54 -- # (( i-- )) 00:14:56.107 15:57:35 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:56.107 15:57:35 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:56.107 15:57:35 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:56.107 15:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.107 15:57:35 -- common/autotest_common.sh@10 -- # set +x 00:14:56.107 15:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.107 15:57:35 -- target/host_management.sh@55 -- # read_io_count=323 00:14:56.107 15:57:35 -- target/host_management.sh@58 -- # '[' 323 -ge 100 ']' 00:14:56.107 15:57:35 -- target/host_management.sh@59 -- # ret=0 00:14:56.107 15:57:35 -- target/host_management.sh@60 -- # break 00:14:56.107 15:57:35 -- target/host_management.sh@64 -- # return 0 00:14:56.107 15:57:35 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:56.107 15:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.107 15:57:35 -- common/autotest_common.sh@10 -- # set +x 00:14:56.107 [2024-04-26 15:57:35.630452] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:14:56.107 [2024-04-26 15:57:35.631314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:56.107 [2024-04-26 15:57:35.631360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:14:56.107 [2024-04-26 15:57:35.631406 - 15:57:35.632814] nvme_qpair.c: (repetitive abort dump condensed: matching print_command/print_completion pairs for the remaining in-flight I/O -- WRITE sqid:1 cid:32-63, lba 53248-57216 and READ sqid:1 cid:0-27, lba 49152-52608, each len:128 -- every one completed ABORTED - SQ DELETION (00/08) qid:1 after the host was removed from the subsystem and the I/O queue pair torn down) 00:14:56.108 [2024-04-26 15:57:35.632814] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:56.109 [2024-04-26 15:57:35.632825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.109 [2024-04-26 15:57:35.632838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:56.109 [2024-04-26 15:57:35.632849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.109 [2024-04-26 15:57:35.632860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:56.109 [2024-04-26 15:57:35.632871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.109 [2024-04-26 15:57:35.632882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007e40 is same with the state(5) to be set 00:14:56.109 [2024-04-26 15:57:35.633143] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007e40 was disconnected and freed. reset controller. 00:14:56.109 [2024-04-26 15:57:35.634146] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:56.109 15:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.109 15:57:35 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:56.109 task offset: 53120 on job bdev=Nvme0n1 fails 00:14:56.109 00:14:56.109 Latency(us) 00:14:56.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.109 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:56.109 Job: Nvme0n1 ended in about 0.41 seconds with error 00:14:56.109 Verification LBA range: start 0x0 length 0x400 00:14:56.109 Nvme0n1 : 0.41 940.62 58.79 156.77 0.00 56847.04 2607.19 55164.22 00:14:56.109 =================================================================================================================== 00:14:56.109 Total : 940.62 58.79 156.77 0.00 56847.04 2607.19 55164.22 00:14:56.109 15:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:56.109 15:57:35 -- common/autotest_common.sh@10 -- # set +x 00:14:56.109 [2024-04-26 15:57:35.638801] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:56.109 [2024-04-26 15:57:35.638839] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:14:56.109 15:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:56.109 15:57:35 -- target/host_management.sh@87 -- # sleep 1 00:14:56.109 [2024-04-26 15:57:35.691888] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
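Reduced to the two RPCs that drive it, the host-management test above removes the allowed host from cnode0 while bdevperf I/O is outstanding, which tears down the queue pair (the ABORTED - SQ DELETION completions condensed above), and then re-adds it so the bdev_nvme reset path can reconnect. A sketch of that cycle against the target's RPC socket (the relative script path and the default /var/tmp/spdk.sock socket are assumptions of this sketch):

# Revoke the host: in-flight I/O on the existing connection is aborted.
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1
# Re-authorize the host: the initiator's controller reset can now reconnect.
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0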
00:14:57.045 15:57:36 -- target/host_management.sh@91 -- # kill -9 2403742 00:14:57.045 15:57:36 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:57.045 15:57:36 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:57.045 15:57:36 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:57.045 15:57:36 -- nvmf/common.sh@521 -- # config=() 00:14:57.045 15:57:36 -- nvmf/common.sh@521 -- # local subsystem config 00:14:57.045 15:57:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:57.045 15:57:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:57.045 { 00:14:57.045 "params": { 00:14:57.045 "name": "Nvme$subsystem", 00:14:57.045 "trtype": "$TEST_TRANSPORT", 00:14:57.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:57.045 "adrfam": "ipv4", 00:14:57.045 "trsvcid": "$NVMF_PORT", 00:14:57.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:57.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:57.045 "hdgst": ${hdgst:-false}, 00:14:57.045 "ddgst": ${ddgst:-false} 00:14:57.045 }, 00:14:57.045 "method": "bdev_nvme_attach_controller" 00:14:57.045 } 00:14:57.045 EOF 00:14:57.045 )") 00:14:57.045 15:57:36 -- nvmf/common.sh@543 -- # cat 00:14:57.045 15:57:36 -- nvmf/common.sh@545 -- # jq . 00:14:57.045 15:57:36 -- nvmf/common.sh@546 -- # IFS=, 00:14:57.045 15:57:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:57.045 "params": { 00:14:57.045 "name": "Nvme0", 00:14:57.045 "trtype": "tcp", 00:14:57.045 "traddr": "10.0.0.2", 00:14:57.045 "adrfam": "ipv4", 00:14:57.045 "trsvcid": "4420", 00:14:57.045 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:57.045 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:57.045 "hdgst": false, 00:14:57.045 "ddgst": false 00:14:57.045 }, 00:14:57.045 "method": "bdev_nvme_attach_controller" 00:14:57.045 }' 00:14:57.045 [2024-04-26 15:57:36.724902] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:57.045 [2024-04-26 15:57:36.724990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2404216 ] 00:14:57.303 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.303 [2024-04-26 15:57:36.827941] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.562 [2024-04-26 15:57:37.062081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.129 Running I/O for 1 seconds... 
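Before pulling the host, the test waited for traffic with the waitforio helper, which polls bdevperf's RPC socket until the Nvme0n1 bdev reports at least 100 reads (the read_io_count=3 then read_io_count=323 samples earlier). A condensed sketch of that polling loop, with the retry bound simplified relative to the real helper:

# Poll bdev_get_iostat on the bdevperf socket until I/O is clearly flowing.
for i in $(seq 1 10); do
    ops=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    [ "$ops" -ge 100 ] && break
    sleep 0.25
done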
00:14:59.066 00:14:59.066 Latency(us) 00:14:59.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.066 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:59.066 Verification LBA range: start 0x0 length 0x400 00:14:59.066 Nvme0n1 : 1.01 1203.41 75.21 0.00 0.00 52425.85 10827.69 53568.56 00:14:59.066 =================================================================================================================== 00:14:59.066 Total : 1203.41 75.21 0.00 0.00 52425.85 10827.69 53568.56 00:15:00.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2403742 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:15:00.446 15:57:39 -- target/host_management.sh@102 -- # stoptarget 00:15:00.446 15:57:39 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:15:00.446 15:57:39 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:00.446 15:57:39 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:00.446 15:57:39 -- target/host_management.sh@40 -- # nvmftestfini 00:15:00.446 15:57:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:00.446 15:57:39 -- nvmf/common.sh@117 -- # sync 00:15:00.446 15:57:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:00.446 15:57:39 -- nvmf/common.sh@120 -- # set +e 00:15:00.446 15:57:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:00.446 15:57:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:00.446 rmmod nvme_tcp 00:15:00.446 rmmod nvme_fabrics 00:15:00.446 rmmod nvme_keyring 00:15:00.446 15:57:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:00.446 15:57:39 -- nvmf/common.sh@124 -- # set -e 00:15:00.446 15:57:39 -- nvmf/common.sh@125 -- # return 0 00:15:00.446 15:57:39 -- nvmf/common.sh@478 -- # '[' -n 2403534 ']' 00:15:00.446 15:57:39 -- nvmf/common.sh@479 -- # killprocess 2403534 00:15:00.446 15:57:39 -- common/autotest_common.sh@936 -- # '[' -z 2403534 ']' 00:15:00.446 15:57:39 -- common/autotest_common.sh@940 -- # kill -0 2403534 00:15:00.446 15:57:39 -- common/autotest_common.sh@941 -- # uname 00:15:00.446 15:57:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:00.446 15:57:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2403534 00:15:00.446 15:57:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:00.446 15:57:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:00.446 15:57:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2403534' 00:15:00.446 killing process with pid 2403534 00:15:00.446 15:57:39 -- common/autotest_common.sh@955 -- # kill 2403534 00:15:00.446 15:57:39 -- common/autotest_common.sh@960 -- # wait 2403534 00:15:01.825 [2024-04-26 15:57:41.276206] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:01.825 15:57:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:01.825 15:57:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:01.825 15:57:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:01.825 15:57:41 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:01.825 15:57:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:01.825 15:57:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:15:01.825 15:57:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:01.825 15:57:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.732 15:57:43 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:03.732 00:15:03.732 real 0m10.287s 00:15:03.732 user 0m34.543s 00:15:03.732 sys 0m1.431s 00:15:03.732 15:57:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:03.732 15:57:43 -- common/autotest_common.sh@10 -- # set +x 00:15:03.732 ************************************ 00:15:03.732 END TEST nvmf_host_management 00:15:03.732 ************************************ 00:15:03.990 15:57:43 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:15:03.990 00:15:03.990 real 0m16.162s 00:15:03.990 user 0m36.161s 00:15:03.990 sys 0m5.693s 00:15:03.990 15:57:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:03.990 15:57:43 -- common/autotest_common.sh@10 -- # set +x 00:15:03.990 ************************************ 00:15:03.990 END TEST nvmf_host_management 00:15:03.990 ************************************ 00:15:03.990 15:57:43 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:03.990 15:57:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:03.990 15:57:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:03.990 15:57:43 -- common/autotest_common.sh@10 -- # set +x 00:15:03.990 ************************************ 00:15:03.990 START TEST nvmf_lvol 00:15:03.990 ************************************ 00:15:03.990 15:57:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:04.249 * Looking for test storage... 
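Between tests, nvmftestfini undoes the per-test plumbing seen above: the host-side NVMe/TCP modules are unloaded, the nvmf_tgt started by nvmfappstart is killed, and the namespace and leftover addresses are cleaned up. Roughly, using the names from this run (the explicit ip netns delete is an assumption about what _remove_spdk_ns does under the hood):

modprobe -v -r nvme-tcp                      # mirrors the rmmod nvme_tcp line above
modprobe -v -r nvme-fabrics                  # mirrors the rmmod nvme_fabrics line above
kill 2403534 && wait 2403534 2>/dev/null     # stop the nvmf_tgt pid started for this test
ip netns delete cvl_0_0_ns_spdk              # drop the target-side namespace
ip -4 addr flush cvl_0_1                     # remove the 10.0.0.1/24 initiator address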
00:15:04.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:04.249 15:57:43 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:04.249 15:57:43 -- nvmf/common.sh@7 -- # uname -s 00:15:04.249 15:57:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.249 15:57:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.249 15:57:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.249 15:57:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.249 15:57:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.249 15:57:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.249 15:57:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.249 15:57:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.249 15:57:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.249 15:57:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.249 15:57:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:04.249 15:57:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:04.249 15:57:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.249 15:57:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.249 15:57:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:04.249 15:57:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:04.249 15:57:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:04.249 15:57:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.249 15:57:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.249 15:57:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.249 15:57:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.249 15:57:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.249 15:57:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.249 15:57:43 -- paths/export.sh@5 -- # export PATH 00:15:04.249 15:57:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.249 15:57:43 -- nvmf/common.sh@47 -- # : 0 00:15:04.249 15:57:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:04.249 15:57:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:04.249 15:57:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:04.249 15:57:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.249 15:57:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.249 15:57:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:04.249 15:57:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:04.249 15:57:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:04.249 15:57:43 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:04.249 15:57:43 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:04.249 15:57:43 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:04.249 15:57:43 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:04.249 15:57:43 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:04.249 15:57:43 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:04.249 15:57:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:04.249 15:57:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.249 15:57:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:04.249 15:57:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:04.249 15:57:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:04.249 15:57:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.249 15:57:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.249 15:57:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.249 15:57:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:04.249 15:57:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:04.249 15:57:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:04.249 15:57:43 -- common/autotest_common.sh@10 -- # set +x 00:15:09.532 15:57:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:09.532 15:57:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:09.532 15:57:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:09.532 15:57:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:09.532 15:57:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:09.532 15:57:48 
-- nvmf/common.sh@293 -- # pci_drivers=() 00:15:09.532 15:57:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:09.532 15:57:48 -- nvmf/common.sh@295 -- # net_devs=() 00:15:09.532 15:57:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:09.532 15:57:48 -- nvmf/common.sh@296 -- # e810=() 00:15:09.532 15:57:48 -- nvmf/common.sh@296 -- # local -ga e810 00:15:09.532 15:57:48 -- nvmf/common.sh@297 -- # x722=() 00:15:09.532 15:57:48 -- nvmf/common.sh@297 -- # local -ga x722 00:15:09.532 15:57:48 -- nvmf/common.sh@298 -- # mlx=() 00:15:09.532 15:57:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:09.532 15:57:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:09.532 15:57:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:09.532 15:57:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:09.532 15:57:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:09.532 15:57:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:09.532 15:57:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:09.532 15:57:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:09.532 15:57:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:09.532 15:57:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:09.532 15:57:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:09.532 15:57:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:09.532 15:57:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:09.532 15:57:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:09.532 15:57:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:09.532 15:57:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:09.532 15:57:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:09.532 15:57:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:09.532 15:57:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:09.532 15:57:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:09.532 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:09.532 15:57:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:09.532 15:57:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:09.532 15:57:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.532 15:57:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.532 15:57:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:09.532 15:57:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:09.532 15:57:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:09.532 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:09.532 15:57:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:09.532 15:57:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:09.532 15:57:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.532 15:57:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.532 15:57:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:09.532 15:57:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:09.532 15:57:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:09.532 15:57:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:09.532 15:57:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:09.532 15:57:48 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.532 15:57:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:09.532 15:57:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.532 15:57:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:09.532 Found net devices under 0000:86:00.0: cvl_0_0 00:15:09.532 15:57:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.532 15:57:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:09.532 15:57:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.532 15:57:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:09.532 15:57:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.532 15:57:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:09.532 Found net devices under 0000:86:00.1: cvl_0_1 00:15:09.532 15:57:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.532 15:57:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:09.532 15:57:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:09.532 15:57:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:09.532 15:57:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:09.532 15:57:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:09.532 15:57:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:09.533 15:57:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:09.533 15:57:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:09.533 15:57:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:09.533 15:57:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:09.533 15:57:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:09.533 15:57:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:09.533 15:57:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:09.533 15:57:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:09.533 15:57:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:09.533 15:57:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:09.533 15:57:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:09.533 15:57:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:09.533 15:57:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:09.533 15:57:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:09.533 15:57:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:09.533 15:57:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:09.533 15:57:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:09.533 15:57:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:09.533 15:57:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:09.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:09.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:15:09.533 00:15:09.533 --- 10.0.0.2 ping statistics --- 00:15:09.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.533 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:15:09.533 15:57:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:09.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:09.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:15:09.533 00:15:09.533 --- 10.0.0.1 ping statistics --- 00:15:09.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.533 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:15:09.533 15:57:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.533 15:57:48 -- nvmf/common.sh@411 -- # return 0 00:15:09.533 15:57:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:09.533 15:57:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.533 15:57:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:09.533 15:57:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:09.533 15:57:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.533 15:57:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:09.533 15:57:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:09.533 15:57:48 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:09.533 15:57:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:09.533 15:57:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:09.533 15:57:48 -- common/autotest_common.sh@10 -- # set +x 00:15:09.533 15:57:48 -- nvmf/common.sh@470 -- # nvmfpid=2408450 00:15:09.533 15:57:48 -- nvmf/common.sh@471 -- # waitforlisten 2408450 00:15:09.533 15:57:48 -- common/autotest_common.sh@817 -- # '[' -z 2408450 ']' 00:15:09.533 15:57:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.533 15:57:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:09.533 15:57:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.533 15:57:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:09.533 15:57:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:09.533 15:57:48 -- common/autotest_common.sh@10 -- # set +x 00:15:09.533 [2024-04-26 15:57:49.053983] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:09.533 [2024-04-26 15:57:49.054068] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.533 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.533 [2024-04-26 15:57:49.162763] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:09.792 [2024-04-26 15:57:49.377885] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.792 [2024-04-26 15:57:49.377931] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.792 [2024-04-26 15:57:49.377943] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.792 [2024-04-26 15:57:49.377954] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.792 [2024-04-26 15:57:49.377964] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
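The nvmf_tcp_init trace above turns the two E810 ports into a self-contained NVMe/TCP loopback: cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), TCP port 4420 is opened in iptables, and both directions are checked with a single ping. A minimal stand-alone sketch of the same bring-up, assuming this host's cvl_0_0/cvl_0_1 interface names:

    ip netns add cvl_0_0_ns_spdk                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns

The target application is then run under ip netns exec inside cvl_0_0_ns_spdk, so NVMe/TCP traffic really crosses the two physical ports instead of staying on loopback.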
00:15:09.792 [2024-04-26 15:57:49.378034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.792 [2024-04-26 15:57:49.378121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.792 [2024-04-26 15:57:49.378128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.362 15:57:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:10.362 15:57:49 -- common/autotest_common.sh@850 -- # return 0 00:15:10.362 15:57:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:10.362 15:57:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:10.362 15:57:49 -- common/autotest_common.sh@10 -- # set +x 00:15:10.362 15:57:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.362 15:57:49 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:10.362 [2024-04-26 15:57:50.030657] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:10.621 15:57:50 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:10.880 15:57:50 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:10.880 15:57:50 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:11.138 15:57:50 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:11.138 15:57:50 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:11.138 15:57:50 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:11.397 15:57:50 -- target/nvmf_lvol.sh@29 -- # lvs=a16a2691-55c9-45bf-ac38-975ddc79c22b 00:15:11.397 15:57:50 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a16a2691-55c9-45bf-ac38-975ddc79c22b lvol 20 00:15:11.656 15:57:51 -- target/nvmf_lvol.sh@32 -- # lvol=572370ca-88b5-46e1-9526-be562bdcb675 00:15:11.656 15:57:51 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:11.656 15:57:51 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 572370ca-88b5-46e1-9526-be562bdcb675 00:15:11.914 15:57:51 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:12.173 [2024-04-26 15:57:51.681001] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.174 15:57:51 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:12.432 15:57:51 -- target/nvmf_lvol.sh@42 -- # perf_pid=2408948 00:15:12.432 15:57:51 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:12.432 15:57:51 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:12.432 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.370 
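Before the perf run starts, the nvmf_lvol test builds its namespace bottom-up over RPC: two 64 MiB malloc bdevs are striped into a RAID0, a logical volume store named lvs is created on the RAID, one lvol is carved out of it, and that lvol becomes namespace 1 of subsystem nqn.2016-06.io.spdk:cnode0 with a TCP listener on 10.0.0.2:4420. A condensed sketch of that sequence, with $rpc as shorthand for the full scripts/rpc.py path used in the log and <lvs-uuid>/<lvol-uuid> standing in for the UUIDs it prints (a16a2691-... and 572370ca-...):

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                 # run twice: Malloc0, Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    $rpc bdev_lvol_create_lvstore raid0 lvs
    $rpc bdev_lvol_create -u <lvs-uuid> lvol 20
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

With spdk_nvme_perf now driving 4 KiB random writes at queue depth 128 for 10 seconds from the root namespace, the trace below takes a snapshot, resizes the lvol, clones the snapshot and inflates the clone, all while that I/O is in flight.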
15:57:52 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 572370ca-88b5-46e1-9526-be562bdcb675 MY_SNAPSHOT 00:15:13.629 15:57:53 -- target/nvmf_lvol.sh@47 -- # snapshot=dd4fed1e-0755-4c8d-995a-e7625e89c815 00:15:13.629 15:57:53 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 572370ca-88b5-46e1-9526-be562bdcb675 30 00:15:13.908 15:57:53 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone dd4fed1e-0755-4c8d-995a-e7625e89c815 MY_CLONE 00:15:13.908 15:57:53 -- target/nvmf_lvol.sh@49 -- # clone=69b81cd2-8f3d-499c-8dfb-31d37d0a3684 00:15:13.908 15:57:53 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 69b81cd2-8f3d-499c-8dfb-31d37d0a3684 00:15:14.514 15:57:54 -- target/nvmf_lvol.sh@53 -- # wait 2408948 00:15:24.500 Initializing NVMe Controllers 00:15:24.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:24.500 Controller IO queue size 128, less than required. 00:15:24.500 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:24.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:24.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:24.500 Initialization complete. Launching workers. 00:15:24.500 ======================================================== 00:15:24.500 Latency(us) 00:15:24.500 Device Information : IOPS MiB/s Average min max 00:15:24.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10708.78 41.83 11956.69 544.06 207750.69 00:15:24.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10467.48 40.89 12229.85 3611.95 136605.69 00:15:24.500 ======================================================== 00:15:24.500 Total : 21176.26 82.72 12091.71 544.06 207750.69 00:15:24.500 00:15:24.500 15:58:02 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:24.500 15:58:02 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 572370ca-88b5-46e1-9526-be562bdcb675 00:15:24.500 15:58:02 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a16a2691-55c9-45bf-ac38-975ddc79c22b 00:15:24.500 15:58:02 -- target/nvmf_lvol.sh@60 -- # rm -f 00:15:24.500 15:58:02 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:15:24.500 15:58:02 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:15:24.500 15:58:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:24.500 15:58:02 -- nvmf/common.sh@117 -- # sync 00:15:24.500 15:58:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:24.500 15:58:02 -- nvmf/common.sh@120 -- # set +e 00:15:24.500 15:58:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:24.500 15:58:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:24.500 rmmod nvme_tcp 00:15:24.500 rmmod nvme_fabrics 00:15:24.500 rmmod nvme_keyring 00:15:24.500 15:58:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:24.500 15:58:02 -- nvmf/common.sh@124 -- # set -e 00:15:24.500 15:58:02 -- nvmf/common.sh@125 -- # return 0 00:15:24.500 15:58:02 -- nvmf/common.sh@478 -- # '[' -n 2408450 
']' 00:15:24.500 15:58:02 -- nvmf/common.sh@479 -- # killprocess 2408450 00:15:24.500 15:58:03 -- common/autotest_common.sh@936 -- # '[' -z 2408450 ']' 00:15:24.500 15:58:03 -- common/autotest_common.sh@940 -- # kill -0 2408450 00:15:24.500 15:58:03 -- common/autotest_common.sh@941 -- # uname 00:15:24.500 15:58:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:24.500 15:58:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2408450 00:15:24.500 15:58:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:24.500 15:58:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:24.500 15:58:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2408450' 00:15:24.500 killing process with pid 2408450 00:15:24.500 15:58:03 -- common/autotest_common.sh@955 -- # kill 2408450 00:15:24.500 15:58:03 -- common/autotest_common.sh@960 -- # wait 2408450 00:15:25.069 15:58:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:25.069 15:58:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:25.069 15:58:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:25.069 15:58:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.069 15:58:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:25.069 15:58:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.069 15:58:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.069 15:58:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.606 15:58:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:27.606 00:15:27.606 real 0m23.154s 00:15:27.606 user 1m7.635s 00:15:27.606 sys 0m6.783s 00:15:27.606 15:58:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:27.606 15:58:06 -- common/autotest_common.sh@10 -- # set +x 00:15:27.606 ************************************ 00:15:27.606 END TEST nvmf_lvol 00:15:27.606 ************************************ 00:15:27.606 15:58:06 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:27.606 15:58:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:27.606 15:58:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:27.606 15:58:06 -- common/autotest_common.sh@10 -- # set +x 00:15:27.606 ************************************ 00:15:27.606 START TEST nvmf_lvs_grow 00:15:27.606 ************************************ 00:15:27.606 15:58:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:27.606 * Looking for test storage... 
00:15:27.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:27.606 15:58:07 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.606 15:58:07 -- nvmf/common.sh@7 -- # uname -s 00:15:27.606 15:58:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.606 15:58:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.606 15:58:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.606 15:58:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.606 15:58:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.606 15:58:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.606 15:58:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.606 15:58:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.606 15:58:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.606 15:58:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.606 15:58:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:27.606 15:58:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:27.606 15:58:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.606 15:58:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.606 15:58:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.606 15:58:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.606 15:58:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.606 15:58:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.606 15:58:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.606 15:58:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.606 15:58:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.606 15:58:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.606 15:58:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.606 15:58:07 -- paths/export.sh@5 -- # export PATH 00:15:27.606 15:58:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.606 15:58:07 -- nvmf/common.sh@47 -- # : 0 00:15:27.606 15:58:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:27.606 15:58:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:27.606 15:58:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.606 15:58:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.606 15:58:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.606 15:58:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:27.606 15:58:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:27.606 15:58:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:27.606 15:58:07 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:27.606 15:58:07 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:27.606 15:58:07 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:15:27.606 15:58:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:27.606 15:58:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.606 15:58:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:27.606 15:58:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:27.606 15:58:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:27.606 15:58:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.606 15:58:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.606 15:58:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.606 15:58:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:27.606 15:58:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:27.606 15:58:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:27.606 15:58:07 -- common/autotest_common.sh@10 -- # set +x 00:15:32.885 15:58:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:32.885 15:58:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:32.885 15:58:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:32.885 15:58:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:32.885 15:58:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:32.885 15:58:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:32.885 15:58:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:32.885 15:58:12 -- nvmf/common.sh@295 -- # net_devs=() 00:15:32.885 15:58:12 
-- nvmf/common.sh@295 -- # local -ga net_devs 00:15:32.885 15:58:12 -- nvmf/common.sh@296 -- # e810=() 00:15:32.885 15:58:12 -- nvmf/common.sh@296 -- # local -ga e810 00:15:32.885 15:58:12 -- nvmf/common.sh@297 -- # x722=() 00:15:32.885 15:58:12 -- nvmf/common.sh@297 -- # local -ga x722 00:15:32.885 15:58:12 -- nvmf/common.sh@298 -- # mlx=() 00:15:32.885 15:58:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:32.885 15:58:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:32.885 15:58:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:32.885 15:58:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:32.885 15:58:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:32.885 15:58:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:32.885 15:58:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:32.885 15:58:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:32.885 15:58:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:32.885 15:58:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:32.885 15:58:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:32.885 15:58:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:32.885 15:58:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:32.885 15:58:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:32.885 15:58:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:32.885 15:58:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:32.885 15:58:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:32.885 15:58:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:32.885 15:58:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:32.885 15:58:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:32.885 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:32.885 15:58:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:32.885 15:58:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:32.885 15:58:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.885 15:58:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.885 15:58:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:32.885 15:58:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:32.885 15:58:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:32.885 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:32.885 15:58:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:32.885 15:58:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:32.885 15:58:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.885 15:58:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.885 15:58:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:32.885 15:58:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:32.885 15:58:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:32.885 15:58:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:32.885 15:58:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:32.885 15:58:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.885 15:58:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:32.885 15:58:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.885 15:58:12 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:32.885 Found net devices under 0000:86:00.0: cvl_0_0 00:15:32.885 15:58:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.885 15:58:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:32.885 15:58:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.885 15:58:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:32.885 15:58:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.885 15:58:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:32.885 Found net devices under 0000:86:00.1: cvl_0_1 00:15:32.885 15:58:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.885 15:58:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:32.885 15:58:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:32.885 15:58:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:32.885 15:58:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:32.885 15:58:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:32.885 15:58:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:32.885 15:58:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:32.885 15:58:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:32.885 15:58:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:32.885 15:58:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:32.885 15:58:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:32.885 15:58:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:32.885 15:58:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:32.885 15:58:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.885 15:58:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:32.885 15:58:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:32.885 15:58:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:32.885 15:58:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:33.145 15:58:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:33.145 15:58:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:33.145 15:58:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:33.145 15:58:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:33.145 15:58:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:33.145 15:58:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:33.145 15:58:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:33.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:33.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:15:33.145 00:15:33.145 --- 10.0.0.2 ping statistics --- 00:15:33.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.145 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:15:33.145 15:58:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:33.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:33.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:15:33.145 00:15:33.145 --- 10.0.0.1 ping statistics --- 00:15:33.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.145 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:15:33.145 15:58:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.145 15:58:12 -- nvmf/common.sh@411 -- # return 0 00:15:33.145 15:58:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:33.145 15:58:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.145 15:58:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:33.145 15:58:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:33.145 15:58:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.145 15:58:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:33.145 15:58:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:33.405 15:58:12 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:15:33.405 15:58:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:33.405 15:58:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:33.405 15:58:12 -- common/autotest_common.sh@10 -- # set +x 00:15:33.405 15:58:12 -- nvmf/common.sh@470 -- # nvmfpid=2414545 00:15:33.405 15:58:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:33.405 15:58:12 -- nvmf/common.sh@471 -- # waitforlisten 2414545 00:15:33.405 15:58:12 -- common/autotest_common.sh@817 -- # '[' -z 2414545 ']' 00:15:33.405 15:58:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.405 15:58:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:33.405 15:58:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.405 15:58:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:33.405 15:58:12 -- common/autotest_common.sh@10 -- # set +x 00:15:33.405 [2024-04-26 15:58:12.913330] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:33.405 [2024-04-26 15:58:12.913428] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.405 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.405 [2024-04-26 15:58:13.021235] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.664 [2024-04-26 15:58:13.250630] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.664 [2024-04-26 15:58:13.250678] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.664 [2024-04-26 15:58:13.250688] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.664 [2024-04-26 15:58:13.250699] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.664 [2024-04-26 15:58:13.250710] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
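Startup for the lvs_grow suite mirrors the earlier one, except the target is given a single reactor (-m 0x1) rather than the three cores (0x7) used by the lvol test. nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten blocks until its RPC socket answers; roughly as below, with the workspace paths shortened and the polling loop a simplified stand-in for the real waitforlisten helper in autotest_common.sh:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # simplified stand-in: poll the default RPC socket until the app responds
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done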
00:15:33.664 [2024-04-26 15:58:13.250745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.256 15:58:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:34.256 15:58:13 -- common/autotest_common.sh@850 -- # return 0 00:15:34.256 15:58:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:34.256 15:58:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:34.256 15:58:13 -- common/autotest_common.sh@10 -- # set +x 00:15:34.256 15:58:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.256 15:58:13 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:34.256 [2024-04-26 15:58:13.872704] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.256 15:58:13 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:15:34.256 15:58:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:34.256 15:58:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:34.256 15:58:13 -- common/autotest_common.sh@10 -- # set +x 00:15:34.515 ************************************ 00:15:34.515 START TEST lvs_grow_clean 00:15:34.515 ************************************ 00:15:34.515 15:58:14 -- common/autotest_common.sh@1111 -- # lvs_grow 00:15:34.515 15:58:14 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:34.516 15:58:14 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:34.516 15:58:14 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:34.516 15:58:14 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:34.516 15:58:14 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:34.516 15:58:14 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:34.516 15:58:14 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:34.516 15:58:14 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:34.516 15:58:14 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:34.775 15:58:14 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:34.775 15:58:14 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:34.775 15:58:14 -- target/nvmf_lvs_grow.sh@28 -- # lvs=06f2bc37-1bcc-434f-90da-07b580bf6de6 00:15:34.775 15:58:14 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06f2bc37-1bcc-434f-90da-07b580bf6de6 00:15:34.775 15:58:14 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:35.034 15:58:14 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:35.034 15:58:14 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:35.034 15:58:14 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 06f2bc37-1bcc-434f-90da-07b580bf6de6 lvol 150 00:15:35.293 15:58:14 -- target/nvmf_lvs_grow.sh@33 -- # lvol=58bb0839-aaeb-4343-b12b-3d68536be798 00:15:35.293 15:58:14 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:35.293 15:58:14 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:35.293 [2024-04-26 15:58:14.879119] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:35.293 [2024-04-26 15:58:14.879198] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:35.293 true 00:15:35.293 15:58:14 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06f2bc37-1bcc-434f-90da-07b580bf6de6 00:15:35.293 15:58:14 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:35.552 15:58:15 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:35.552 15:58:15 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:35.811 15:58:15 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 58bb0839-aaeb-4343-b12b-3d68536be798 00:15:35.811 15:58:15 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:36.070 [2024-04-26 15:58:15.541219] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.070 15:58:15 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:36.070 15:58:15 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2415056 00:15:36.070 15:58:15 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:36.070 15:58:15 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2415056 /var/tmp/bdevperf.sock 00:15:36.070 15:58:15 -- common/autotest_common.sh@817 -- # '[' -z 2415056 ']' 00:15:36.070 15:58:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:36.070 15:58:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:36.070 15:58:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:36.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:36.070 15:58:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:36.070 15:58:15 -- common/autotest_common.sh@10 -- # set +x 00:15:36.070 15:58:15 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:36.329 [2024-04-26 15:58:15.775331] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
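What lvs_grow_clean exercises here is on-line growth of a logical volume store: a 200 MiB file is exposed as a 4 KiB-block AIO bdev, an lvstore with 4 MiB clusters is created on it (49 data clusters), and a 150 MiB lvol from that store is exported over NVMe/TCP. The backing file has just been truncated to 400 MiB and bdev_aio_rescan made the bdev report the new size (51200 -> 102400 blocks), yet total_data_clusters is deliberately still 49; the store itself is only grown later, under bdevperf load, with bdev_lvol_grow_lvstore, after which the trace further down checks that the count has become 99. A condensed sketch, with $rpc shorthand for scripts/rpc.py and aio_file standing in for the test's .../test/nvmf/target/aio_bdev path:

    truncate -s 200M aio_file
    $rpc bdev_aio_create aio_file aio_bdev 4096
    $rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
    $rpc bdev_lvol_create -u <lvs-uuid> lvol 150
    truncate -s 400M aio_file                      # grow the backing file
    $rpc bdev_aio_rescan aio_bdev                  # AIO bdev now reports the larger size
    $rpc bdev_lvol_grow_lvstore -u <lvs-uuid>      # issued later, while I/O is running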
00:15:36.329 [2024-04-26 15:58:15.775418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2415056 ] 00:15:36.329 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.329 [2024-04-26 15:58:15.878516] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.589 [2024-04-26 15:58:16.103330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.849 15:58:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:36.849 15:58:16 -- common/autotest_common.sh@850 -- # return 0 00:15:36.849 15:58:16 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:37.417 Nvme0n1 00:15:37.417 15:58:16 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:37.417 [ 00:15:37.417 { 00:15:37.417 "name": "Nvme0n1", 00:15:37.417 "aliases": [ 00:15:37.417 "58bb0839-aaeb-4343-b12b-3d68536be798" 00:15:37.417 ], 00:15:37.417 "product_name": "NVMe disk", 00:15:37.417 "block_size": 4096, 00:15:37.417 "num_blocks": 38912, 00:15:37.417 "uuid": "58bb0839-aaeb-4343-b12b-3d68536be798", 00:15:37.417 "assigned_rate_limits": { 00:15:37.417 "rw_ios_per_sec": 0, 00:15:37.417 "rw_mbytes_per_sec": 0, 00:15:37.417 "r_mbytes_per_sec": 0, 00:15:37.417 "w_mbytes_per_sec": 0 00:15:37.417 }, 00:15:37.417 "claimed": false, 00:15:37.417 "zoned": false, 00:15:37.417 "supported_io_types": { 00:15:37.417 "read": true, 00:15:37.418 "write": true, 00:15:37.418 "unmap": true, 00:15:37.418 "write_zeroes": true, 00:15:37.418 "flush": true, 00:15:37.418 "reset": true, 00:15:37.418 "compare": true, 00:15:37.418 "compare_and_write": true, 00:15:37.418 "abort": true, 00:15:37.418 "nvme_admin": true, 00:15:37.418 "nvme_io": true 00:15:37.418 }, 00:15:37.418 "memory_domains": [ 00:15:37.418 { 00:15:37.418 "dma_device_id": "system", 00:15:37.418 "dma_device_type": 1 00:15:37.418 } 00:15:37.418 ], 00:15:37.418 "driver_specific": { 00:15:37.418 "nvme": [ 00:15:37.418 { 00:15:37.418 "trid": { 00:15:37.418 "trtype": "TCP", 00:15:37.418 "adrfam": "IPv4", 00:15:37.418 "traddr": "10.0.0.2", 00:15:37.418 "trsvcid": "4420", 00:15:37.418 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:37.418 }, 00:15:37.418 "ctrlr_data": { 00:15:37.418 "cntlid": 1, 00:15:37.418 "vendor_id": "0x8086", 00:15:37.418 "model_number": "SPDK bdev Controller", 00:15:37.418 "serial_number": "SPDK0", 00:15:37.418 "firmware_revision": "24.05", 00:15:37.418 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:37.418 "oacs": { 00:15:37.418 "security": 0, 00:15:37.418 "format": 0, 00:15:37.418 "firmware": 0, 00:15:37.418 "ns_manage": 0 00:15:37.418 }, 00:15:37.418 "multi_ctrlr": true, 00:15:37.418 "ana_reporting": false 00:15:37.418 }, 00:15:37.418 "vs": { 00:15:37.418 "nvme_version": "1.3" 00:15:37.418 }, 00:15:37.418 "ns_data": { 00:15:37.418 "id": 1, 00:15:37.418 "can_share": true 00:15:37.418 } 00:15:37.418 } 00:15:37.418 ], 00:15:37.418 "mp_policy": "active_passive" 00:15:37.418 } 00:15:37.418 } 00:15:37.418 ] 00:15:37.418 15:58:17 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2415290 00:15:37.418 15:58:17 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:37.418 15:58:17 -- target/nvmf_lvs_grow.sh@55 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:37.677 Running I/O for 10 seconds... 00:15:38.616 Latency(us) 00:15:38.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:38.616 Nvme0n1 : 1.00 19019.00 74.29 0.00 0.00 0.00 0.00 0.00 00:15:38.616 =================================================================================================================== 00:15:38.616 Total : 19019.00 74.29 0.00 0.00 0.00 0.00 0.00 00:15:38.616 00:15:39.552 15:58:19 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 06f2bc37-1bcc-434f-90da-07b580bf6de6 00:15:39.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:39.552 Nvme0n1 : 2.00 19243.00 75.17 0.00 0.00 0.00 0.00 0.00 00:15:39.552 =================================================================================================================== 00:15:39.552 Total : 19243.00 75.17 0.00 0.00 0.00 0.00 0.00 00:15:39.552 00:15:39.810 true 00:15:39.810 15:58:19 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06f2bc37-1bcc-434f-90da-07b580bf6de6 00:15:39.810 15:58:19 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:39.810 15:58:19 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:39.810 15:58:19 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:39.810 15:58:19 -- target/nvmf_lvs_grow.sh@65 -- # wait 2415290 00:15:40.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:40.745 Nvme0n1 : 3.00 19228.67 75.11 0.00 0.00 0.00 0.00 0.00 00:15:40.745 =================================================================================================================== 00:15:40.745 Total : 19228.67 75.11 0.00 0.00 0.00 0.00 0.00 00:15:40.745 00:15:41.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:41.680 Nvme0n1 : 4.00 19221.50 75.08 0.00 0.00 0.00 0.00 0.00 00:15:41.680 =================================================================================================================== 00:15:41.680 Total : 19221.50 75.08 0.00 0.00 0.00 0.00 0.00 00:15:41.680 00:15:42.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:42.616 Nvme0n1 : 5.00 19294.00 75.37 0.00 0.00 0.00 0.00 0.00 00:15:42.616 =================================================================================================================== 00:15:42.616 Total : 19294.00 75.37 0.00 0.00 0.00 0.00 0.00 00:15:42.616 00:15:43.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:43.553 Nvme0n1 : 6.00 19345.83 75.57 0.00 0.00 0.00 0.00 0.00 00:15:43.553 =================================================================================================================== 00:15:43.553 Total : 19345.83 75.57 0.00 0.00 0.00 0.00 0.00 00:15:43.553 00:15:44.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:44.486 Nvme0n1 : 7.00 19361.57 75.63 0.00 0.00 0.00 0.00 0.00 00:15:44.486 =================================================================================================================== 00:15:44.486 Total : 19361.57 75.63 0.00 0.00 0.00 0.00 0.00 00:15:44.486 00:15:45.864 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:15:45.864 Nvme0n1 : 8.00 19389.25 75.74 0.00 0.00 0.00 0.00 0.00 00:15:45.864 =================================================================================================================== 00:15:45.864 Total : 19389.25 75.74 0.00 0.00 0.00 0.00 0.00 00:15:45.864 00:15:46.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:46.799 Nvme0n1 : 9.00 19410.89 75.82 0.00 0.00 0.00 0.00 0.00 00:15:46.799 =================================================================================================================== 00:15:46.799 Total : 19410.89 75.82 0.00 0.00 0.00 0.00 0.00 00:15:46.799 00:15:47.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:47.732 Nvme0n1 : 10.00 19428.20 75.89 0.00 0.00 0.00 0.00 0.00 00:15:47.732 =================================================================================================================== 00:15:47.732 Total : 19428.20 75.89 0.00 0.00 0.00 0.00 0.00 00:15:47.732 00:15:47.732 00:15:47.732 Latency(us) 00:15:47.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:47.732 Nvme0n1 : 10.00 19432.30 75.91 0.00 0.00 6583.21 3761.20 19147.91 00:15:47.732 =================================================================================================================== 00:15:47.732 Total : 19432.30 75.91 0.00 0.00 6583.21 3761.20 19147.91 00:15:47.732 0 00:15:47.732 15:58:27 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2415056 00:15:47.732 15:58:27 -- common/autotest_common.sh@936 -- # '[' -z 2415056 ']' 00:15:47.732 15:58:27 -- common/autotest_common.sh@940 -- # kill -0 2415056 00:15:47.732 15:58:27 -- common/autotest_common.sh@941 -- # uname 00:15:47.732 15:58:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:47.732 15:58:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2415056 00:15:47.732 15:58:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:47.732 15:58:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:47.732 15:58:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2415056' 00:15:47.732 killing process with pid 2415056 00:15:47.732 15:58:27 -- common/autotest_common.sh@955 -- # kill 2415056 00:15:47.732 Received shutdown signal, test time was about 10.000000 seconds 00:15:47.732 00:15:47.732 Latency(us) 00:15:47.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.732 =================================================================================================================== 00:15:47.732 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:47.732 15:58:27 -- common/autotest_common.sh@960 -- # wait 2415056 00:15:48.666 15:58:28 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:48.924 15:58:28 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06f2bc37-1bcc-434f-90da-07b580bf6de6 00:15:48.924 15:58:28 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:49.183 15:58:28 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:49.184 15:58:28 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:15:49.184 15:58:28 -- target/nvmf_lvs_grow.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:49.184 [2024-04-26 15:58:28.775819] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:49.184 15:58:28 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06f2bc37-1bcc-434f-90da-07b580bf6de6 00:15:49.184 15:58:28 -- common/autotest_common.sh@638 -- # local es=0 00:15:49.184 15:58:28 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06f2bc37-1bcc-434f-90da-07b580bf6de6 00:15:49.184 15:58:28 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.184 15:58:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:49.184 15:58:28 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.184 15:58:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:49.184 15:58:28 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.184 15:58:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:49.184 15:58:28 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.184 15:58:28 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:49.184 15:58:28 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06f2bc37-1bcc-434f-90da-07b580bf6de6 00:15:49.442 request: 00:15:49.442 { 00:15:49.443 "uuid": "06f2bc37-1bcc-434f-90da-07b580bf6de6", 00:15:49.443 "method": "bdev_lvol_get_lvstores", 00:15:49.443 "req_id": 1 00:15:49.443 } 00:15:49.443 Got JSON-RPC error response 00:15:49.443 response: 00:15:49.443 { 00:15:49.443 "code": -19, 00:15:49.443 "message": "No such device" 00:15:49.443 } 00:15:49.443 15:58:28 -- common/autotest_common.sh@641 -- # es=1 00:15:49.443 15:58:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:49.443 15:58:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:49.443 15:58:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:49.443 15:58:28 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:49.702 aio_bdev 00:15:49.702 15:58:29 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 58bb0839-aaeb-4343-b12b-3d68536be798 00:15:49.702 15:58:29 -- common/autotest_common.sh@885 -- # local bdev_name=58bb0839-aaeb-4343-b12b-3d68536be798 00:15:49.702 15:58:29 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:49.702 15:58:29 -- common/autotest_common.sh@887 -- # local i 00:15:49.702 15:58:29 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:49.702 15:58:29 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:49.702 15:58:29 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:49.702 15:58:29 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 58bb0839-aaeb-4343-b12b-3d68536be798 -t 2000 
00:15:49.960 [ 00:15:49.960 { 00:15:49.960 "name": "58bb0839-aaeb-4343-b12b-3d68536be798", 00:15:49.960 "aliases": [ 00:15:49.960 "lvs/lvol" 00:15:49.960 ], 00:15:49.960 "product_name": "Logical Volume", 00:15:49.960 "block_size": 4096, 00:15:49.960 "num_blocks": 38912, 00:15:49.960 "uuid": "58bb0839-aaeb-4343-b12b-3d68536be798", 00:15:49.960 "assigned_rate_limits": { 00:15:49.960 "rw_ios_per_sec": 0, 00:15:49.960 "rw_mbytes_per_sec": 0, 00:15:49.960 "r_mbytes_per_sec": 0, 00:15:49.960 "w_mbytes_per_sec": 0 00:15:49.960 }, 00:15:49.960 "claimed": false, 00:15:49.960 "zoned": false, 00:15:49.961 "supported_io_types": { 00:15:49.961 "read": true, 00:15:49.961 "write": true, 00:15:49.961 "unmap": true, 00:15:49.961 "write_zeroes": true, 00:15:49.961 "flush": false, 00:15:49.961 "reset": true, 00:15:49.961 "compare": false, 00:15:49.961 "compare_and_write": false, 00:15:49.961 "abort": false, 00:15:49.961 "nvme_admin": false, 00:15:49.961 "nvme_io": false 00:15:49.961 }, 00:15:49.961 "driver_specific": { 00:15:49.961 "lvol": { 00:15:49.961 "lvol_store_uuid": "06f2bc37-1bcc-434f-90da-07b580bf6de6", 00:15:49.961 "base_bdev": "aio_bdev", 00:15:49.961 "thin_provision": false, 00:15:49.961 "snapshot": false, 00:15:49.961 "clone": false, 00:15:49.961 "esnap_clone": false 00:15:49.961 } 00:15:49.961 } 00:15:49.961 } 00:15:49.961 ] 00:15:49.961 15:58:29 -- common/autotest_common.sh@893 -- # return 0 00:15:49.961 15:58:29 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06f2bc37-1bcc-434f-90da-07b580bf6de6 00:15:49.961 15:58:29 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:50.219 15:58:29 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:50.219 15:58:29 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 06f2bc37-1bcc-434f-90da-07b580bf6de6 00:15:50.219 15:58:29 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:50.219 15:58:29 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:50.219 15:58:29 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 58bb0839-aaeb-4343-b12b-3d68536be798 00:15:50.478 15:58:30 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 06f2bc37-1bcc-434f-90da-07b580bf6de6 00:15:50.737 15:58:30 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:50.737 15:58:30 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:50.737 00:15:50.737 real 0m16.398s 00:15:50.737 user 0m16.015s 00:15:50.737 sys 0m1.453s 00:15:50.737 15:58:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:50.737 15:58:30 -- common/autotest_common.sh@10 -- # set +x 00:15:50.737 ************************************ 00:15:50.737 END TEST lvs_grow_clean 00:15:50.737 ************************************ 00:15:50.996 15:58:30 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:50.996 15:58:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:50.996 15:58:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:50.996 15:58:30 -- common/autotest_common.sh@10 -- # set +x 00:15:50.996 ************************************ 00:15:50.996 START TEST lvs_grow_dirty 
00:15:50.996 ************************************ 00:15:50.996 15:58:30 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:15:50.996 15:58:30 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:50.996 15:58:30 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:50.996 15:58:30 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:50.996 15:58:30 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:50.996 15:58:30 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:50.996 15:58:30 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:50.996 15:58:30 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:50.996 15:58:30 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:50.996 15:58:30 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:51.255 15:58:30 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:51.255 15:58:30 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:51.514 15:58:30 -- target/nvmf_lvs_grow.sh@28 -- # lvs=f17266b9-358c-4544-8332-5ad81add6c51 00:15:51.514 15:58:30 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f17266b9-358c-4544-8332-5ad81add6c51 00:15:51.514 15:58:30 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:51.514 15:58:31 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:51.514 15:58:31 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:51.514 15:58:31 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f17266b9-358c-4544-8332-5ad81add6c51 lvol 150 00:15:51.773 15:58:31 -- target/nvmf_lvs_grow.sh@33 -- # lvol=90078a33-2491-445b-b618-64c6b2a5a3d5 00:15:51.773 15:58:31 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:51.773 15:58:31 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:52.032 [2024-04-26 15:58:31.456599] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:52.032 [2024-04-26 15:58:31.456675] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:52.032 true 00:15:52.032 15:58:31 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f17266b9-358c-4544-8332-5ad81add6c51 00:15:52.032 15:58:31 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:52.032 15:58:31 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:52.032 15:58:31 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:52.291 15:58:31 -- target/nvmf_lvs_grow.sh@42 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 90078a33-2491-445b-b618-64c6b2a5a3d5 00:15:52.291 15:58:31 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:52.550 15:58:32 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:52.809 15:58:32 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2417876 00:15:52.809 15:58:32 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:52.809 15:58:32 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2417876 /var/tmp/bdevperf.sock 00:15:52.809 15:58:32 -- common/autotest_common.sh@817 -- # '[' -z 2417876 ']' 00:15:52.809 15:58:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:52.809 15:58:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:52.809 15:58:32 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:52.809 15:58:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:52.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:52.809 15:58:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:52.809 15:58:32 -- common/autotest_common.sh@10 -- # set +x 00:15:52.809 [2024-04-26 15:58:32.372535] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:15:52.809 [2024-04-26 15:58:32.372630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2417876 ] 00:15:52.809 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.809 [2024-04-26 15:58:32.476801] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.067 [2024-04-26 15:58:32.701895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.636 15:58:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:53.636 15:58:33 -- common/autotest_common.sh@850 -- # return 0 00:15:53.636 15:58:33 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:53.894 Nvme0n1 00:15:53.894 15:58:33 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:54.153 [ 00:15:54.153 { 00:15:54.153 "name": "Nvme0n1", 00:15:54.153 "aliases": [ 00:15:54.153 "90078a33-2491-445b-b618-64c6b2a5a3d5" 00:15:54.153 ], 00:15:54.153 "product_name": "NVMe disk", 00:15:54.153 "block_size": 4096, 00:15:54.153 "num_blocks": 38912, 00:15:54.153 "uuid": "90078a33-2491-445b-b618-64c6b2a5a3d5", 00:15:54.153 "assigned_rate_limits": { 00:15:54.153 "rw_ios_per_sec": 0, 00:15:54.153 "rw_mbytes_per_sec": 0, 00:15:54.153 "r_mbytes_per_sec": 0, 00:15:54.153 "w_mbytes_per_sec": 0 00:15:54.153 }, 00:15:54.153 "claimed": false, 00:15:54.153 "zoned": false, 00:15:54.153 "supported_io_types": { 00:15:54.153 "read": true, 00:15:54.153 "write": true, 00:15:54.153 "unmap": true, 00:15:54.153 "write_zeroes": true, 00:15:54.153 "flush": true, 00:15:54.153 "reset": true, 00:15:54.153 "compare": true, 00:15:54.153 "compare_and_write": true, 00:15:54.153 "abort": true, 00:15:54.153 "nvme_admin": true, 00:15:54.153 "nvme_io": true 00:15:54.153 }, 00:15:54.153 "memory_domains": [ 00:15:54.153 { 00:15:54.153 "dma_device_id": "system", 00:15:54.153 "dma_device_type": 1 00:15:54.153 } 00:15:54.153 ], 00:15:54.153 "driver_specific": { 00:15:54.153 "nvme": [ 00:15:54.153 { 00:15:54.153 "trid": { 00:15:54.153 "trtype": "TCP", 00:15:54.153 "adrfam": "IPv4", 00:15:54.153 "traddr": "10.0.0.2", 00:15:54.153 "trsvcid": "4420", 00:15:54.153 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:54.153 }, 00:15:54.153 "ctrlr_data": { 00:15:54.153 "cntlid": 1, 00:15:54.153 "vendor_id": "0x8086", 00:15:54.153 "model_number": "SPDK bdev Controller", 00:15:54.153 "serial_number": "SPDK0", 00:15:54.153 "firmware_revision": "24.05", 00:15:54.153 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:54.153 "oacs": { 00:15:54.153 "security": 0, 00:15:54.153 "format": 0, 00:15:54.153 "firmware": 0, 00:15:54.153 "ns_manage": 0 00:15:54.153 }, 00:15:54.153 "multi_ctrlr": true, 00:15:54.153 "ana_reporting": false 00:15:54.153 }, 00:15:54.153 "vs": { 00:15:54.153 "nvme_version": "1.3" 00:15:54.153 }, 00:15:54.153 "ns_data": { 00:15:54.153 "id": 1, 00:15:54.153 "can_share": true 00:15:54.153 } 00:15:54.153 } 00:15:54.153 ], 00:15:54.153 "mp_policy": "active_passive" 00:15:54.153 } 00:15:54.153 } 00:15:54.153 ] 00:15:54.153 15:58:33 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2418108 00:15:54.153 15:58:33 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:54.153 15:58:33 -- target/nvmf_lvs_grow.sh@55 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:54.153 Running I/O for 10 seconds... 00:15:55.176 Latency(us) 00:15:55.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:55.176 Nvme0n1 : 1.00 18139.00 70.86 0.00 0.00 0.00 0.00 0.00 00:15:55.176 =================================================================================================================== 00:15:55.176 Total : 18139.00 70.86 0.00 0.00 0.00 0.00 0.00 00:15:55.176 00:15:56.137 15:58:35 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f17266b9-358c-4544-8332-5ad81add6c51 00:15:56.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:56.137 Nvme0n1 : 2.00 18273.50 71.38 0.00 0.00 0.00 0.00 0.00 00:15:56.137 =================================================================================================================== 00:15:56.137 Total : 18273.50 71.38 0.00 0.00 0.00 0.00 0.00 00:15:56.137 00:15:56.396 true 00:15:56.396 15:58:35 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f17266b9-358c-4544-8332-5ad81add6c51 00:15:56.396 15:58:35 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:56.396 15:58:36 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:56.396 15:58:36 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:56.396 15:58:36 -- target/nvmf_lvs_grow.sh@65 -- # wait 2418108 00:15:57.332 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:57.332 Nvme0n1 : 3.00 18262.33 71.34 0.00 0.00 0.00 0.00 0.00 00:15:57.332 =================================================================================================================== 00:15:57.332 Total : 18262.33 71.34 0.00 0.00 0.00 0.00 0.00 00:15:57.332 00:15:58.267 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:58.267 Nvme0n1 : 4.00 18350.75 71.68 0.00 0.00 0.00 0.00 0.00 00:15:58.267 =================================================================================================================== 00:15:58.267 Total : 18350.75 71.68 0.00 0.00 0.00 0.00 0.00 00:15:58.267 00:15:59.205 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:59.205 Nvme0n1 : 5.00 18407.00 71.90 0.00 0.00 0.00 0.00 0.00 00:15:59.205 =================================================================================================================== 00:15:59.205 Total : 18407.00 71.90 0.00 0.00 0.00 0.00 0.00 00:15:59.205 00:16:00.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:00.144 Nvme0n1 : 6.00 18449.83 72.07 0.00 0.00 0.00 0.00 0.00 00:16:00.144 =================================================================================================================== 00:16:00.144 Total : 18449.83 72.07 0.00 0.00 0.00 0.00 0.00 00:16:00.144 00:16:01.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:01.521 Nvme0n1 : 7.00 18429.00 71.99 0.00 0.00 0.00 0.00 0.00 00:16:01.521 =================================================================================================================== 00:16:01.521 Total : 18429.00 71.99 0.00 0.00 0.00 0.00 0.00 00:16:01.521 00:16:02.457 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:16:02.457 Nvme0n1 : 8.00 18464.38 72.13 0.00 0.00 0.00 0.00 0.00 00:16:02.457 =================================================================================================================== 00:16:02.457 Total : 18464.38 72.13 0.00 0.00 0.00 0.00 0.00 00:16:02.457 00:16:03.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:03.392 Nvme0n1 : 9.00 18499.00 72.26 0.00 0.00 0.00 0.00 0.00 00:16:03.392 =================================================================================================================== 00:16:03.392 Total : 18499.00 72.26 0.00 0.00 0.00 0.00 0.00 00:16:03.392 00:16:04.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:04.351 Nvme0n1 : 10.00 18527.50 72.37 0.00 0.00 0.00 0.00 0.00 00:16:04.351 =================================================================================================================== 00:16:04.351 Total : 18527.50 72.37 0.00 0.00 0.00 0.00 0.00 00:16:04.351 00:16:04.351 00:16:04.351 Latency(us) 00:16:04.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:04.351 Nvme0n1 : 10.01 18527.57 72.37 0.00 0.00 6903.64 4302.58 17324.30 00:16:04.351 =================================================================================================================== 00:16:04.351 Total : 18527.57 72.37 0.00 0.00 6903.64 4302.58 17324.30 00:16:04.351 0 00:16:04.351 15:58:43 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2417876 00:16:04.351 15:58:43 -- common/autotest_common.sh@936 -- # '[' -z 2417876 ']' 00:16:04.351 15:58:43 -- common/autotest_common.sh@940 -- # kill -0 2417876 00:16:04.351 15:58:43 -- common/autotest_common.sh@941 -- # uname 00:16:04.351 15:58:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:04.351 15:58:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2417876 00:16:04.351 15:58:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:04.351 15:58:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:04.351 15:58:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2417876' 00:16:04.351 killing process with pid 2417876 00:16:04.351 15:58:43 -- common/autotest_common.sh@955 -- # kill 2417876 00:16:04.351 Received shutdown signal, test time was about 10.000000 seconds 00:16:04.351 00:16:04.351 Latency(us) 00:16:04.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.351 =================================================================================================================== 00:16:04.352 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:04.352 15:58:43 -- common/autotest_common.sh@960 -- # wait 2417876 00:16:05.287 15:58:44 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:05.546 15:58:45 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f17266b9-358c-4544-8332-5ad81add6c51 00:16:05.546 15:58:45 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:16:05.546 15:58:45 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:16:05.546 15:58:45 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:16:05.546 15:58:45 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 2414545 00:16:05.546 
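By this point in the trace the dirty-grow path has already resized the backing AIO file from 200M to 400M while bdevperf kept issuing random writes, rescanned the AIO bdev (block count 51200 -> 102400), grown the lvstore, and confirmed total_data_clusters went from 49 to 99. A minimal sketch of that grow sequence, with the long workspace paths abbreviated to the SPDK tree root and the lvstore UUID taken from this run:

    # grow the file backing the AIO bdev (200M -> 400M) while I/O is still running
    truncate -s 400M test/nvmf/target/aio_bdev
    # tell SPDK the AIO file changed size (block count 51200 -> 102400 in the trace)
    scripts/rpc.py bdev_aio_rescan aio_bdev
    # grow the lvstore on top of it, then re-read the cluster count
    scripts/rpc.py bdev_lvol_grow_lvstore -u f17266b9-358c-4544-8332-5ad81add6c51
    data_clusters=$(scripts/rpc.py bdev_lvol_get_lvstores \
        -u f17266b9-358c-4544-8332-5ad81add6c51 | jq -r '.[0].total_data_clusters')
    (( data_clusters == 99 ))   # 49 clusters before the grow, 99 after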
15:58:45 -- target/nvmf_lvs_grow.sh@74 -- # wait 2414545 00:16:05.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 2414545 Killed "${NVMF_APP[@]}" "$@" 00:16:05.804 15:58:45 -- target/nvmf_lvs_grow.sh@74 -- # true 00:16:05.805 15:58:45 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:16:05.805 15:58:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:05.805 15:58:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:05.805 15:58:45 -- common/autotest_common.sh@10 -- # set +x 00:16:05.805 15:58:45 -- nvmf/common.sh@470 -- # nvmfpid=2419961 00:16:05.805 15:58:45 -- nvmf/common.sh@471 -- # waitforlisten 2419961 00:16:05.805 15:58:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:05.805 15:58:45 -- common/autotest_common.sh@817 -- # '[' -z 2419961 ']' 00:16:05.805 15:58:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.805 15:58:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:05.805 15:58:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.805 15:58:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:05.805 15:58:45 -- common/autotest_common.sh@10 -- # set +x 00:16:05.805 [2024-04-26 15:58:45.372569] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:05.805 [2024-04-26 15:58:45.372655] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.805 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.805 [2024-04-26 15:58:45.485553] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.063 [2024-04-26 15:58:45.692698] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.063 [2024-04-26 15:58:45.692745] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.063 [2024-04-26 15:58:45.692755] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:06.063 [2024-04-26 15:58:45.692765] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:06.063 [2024-04-26 15:58:45.692776] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
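The dirty half of the test kills the target with SIGKILL so the lvstore metadata is never cleanly unloaded, then starts a fresh nvmf_tgt and re-creates only the AIO bdev; the blobstore "Performing recovery" notices that follow show the lvstore and lvol being replayed from on-disk metadata rather than re-created. A condensed sketch of that restart, assuming the same namespace and abbreviated paths as the trace (the real run goes through the nvmfappstart/waitforlisten and waitforbdev helpers rather than calling the binaries directly):

    # simulate a crash while the lvstore is still marked dirty on disk
    kill -9 "$old_nvmfpid"          # 2414545 in this run
    # bring up a fresh target inside the test namespace
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    # re-attach only the AIO bdev; blobstore recovery replays lvs/lvol from metadata
    scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    # the lvol reappears without being re-created explicitly
    scripts/rpc.py bdev_get_bdevs -b 90078a33-2491-445b-b618-64c6b2a5a3d5 -t 2000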
00:16:06.063 [2024-04-26 15:58:45.692804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.630 15:58:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:06.630 15:58:46 -- common/autotest_common.sh@850 -- # return 0 00:16:06.630 15:58:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:06.630 15:58:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:06.630 15:58:46 -- common/autotest_common.sh@10 -- # set +x 00:16:06.630 15:58:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.630 15:58:46 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:06.889 [2024-04-26 15:58:46.334731] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:06.889 [2024-04-26 15:58:46.334876] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:06.889 [2024-04-26 15:58:46.334910] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:06.889 15:58:46 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:16:06.889 15:58:46 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 90078a33-2491-445b-b618-64c6b2a5a3d5 00:16:06.889 15:58:46 -- common/autotest_common.sh@885 -- # local bdev_name=90078a33-2491-445b-b618-64c6b2a5a3d5 00:16:06.889 15:58:46 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:06.889 15:58:46 -- common/autotest_common.sh@887 -- # local i 00:16:06.889 15:58:46 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:06.889 15:58:46 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:06.889 15:58:46 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:06.889 15:58:46 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 90078a33-2491-445b-b618-64c6b2a5a3d5 -t 2000 00:16:07.148 [ 00:16:07.148 { 00:16:07.148 "name": "90078a33-2491-445b-b618-64c6b2a5a3d5", 00:16:07.148 "aliases": [ 00:16:07.148 "lvs/lvol" 00:16:07.148 ], 00:16:07.148 "product_name": "Logical Volume", 00:16:07.148 "block_size": 4096, 00:16:07.148 "num_blocks": 38912, 00:16:07.148 "uuid": "90078a33-2491-445b-b618-64c6b2a5a3d5", 00:16:07.148 "assigned_rate_limits": { 00:16:07.148 "rw_ios_per_sec": 0, 00:16:07.148 "rw_mbytes_per_sec": 0, 00:16:07.148 "r_mbytes_per_sec": 0, 00:16:07.148 "w_mbytes_per_sec": 0 00:16:07.148 }, 00:16:07.148 "claimed": false, 00:16:07.148 "zoned": false, 00:16:07.148 "supported_io_types": { 00:16:07.148 "read": true, 00:16:07.148 "write": true, 00:16:07.148 "unmap": true, 00:16:07.148 "write_zeroes": true, 00:16:07.148 "flush": false, 00:16:07.148 "reset": true, 00:16:07.148 "compare": false, 00:16:07.148 "compare_and_write": false, 00:16:07.148 "abort": false, 00:16:07.148 "nvme_admin": false, 00:16:07.148 "nvme_io": false 00:16:07.148 }, 00:16:07.148 "driver_specific": { 00:16:07.148 "lvol": { 00:16:07.148 "lvol_store_uuid": "f17266b9-358c-4544-8332-5ad81add6c51", 00:16:07.148 "base_bdev": "aio_bdev", 00:16:07.148 "thin_provision": false, 00:16:07.148 "snapshot": false, 00:16:07.148 "clone": false, 00:16:07.148 "esnap_clone": false 00:16:07.148 } 00:16:07.148 } 00:16:07.148 } 00:16:07.148 ] 00:16:07.148 15:58:46 -- common/autotest_common.sh@893 -- # return 0 00:16:07.148 15:58:46 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f17266b9-358c-4544-8332-5ad81add6c51 00:16:07.148 15:58:46 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:16:07.407 15:58:46 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:16:07.407 15:58:46 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f17266b9-358c-4544-8332-5ad81add6c51 00:16:07.407 15:58:46 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:16:07.407 15:58:47 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:16:07.407 15:58:47 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:07.666 [2024-04-26 15:58:47.174965] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:07.666 15:58:47 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f17266b9-358c-4544-8332-5ad81add6c51 00:16:07.666 15:58:47 -- common/autotest_common.sh@638 -- # local es=0 00:16:07.666 15:58:47 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f17266b9-358c-4544-8332-5ad81add6c51 00:16:07.666 15:58:47 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:07.666 15:58:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:07.666 15:58:47 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:07.666 15:58:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:07.666 15:58:47 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:07.666 15:58:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:07.666 15:58:47 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:07.666 15:58:47 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:07.666 15:58:47 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f17266b9-358c-4544-8332-5ad81add6c51 00:16:07.924 request: 00:16:07.924 { 00:16:07.924 "uuid": "f17266b9-358c-4544-8332-5ad81add6c51", 00:16:07.924 "method": "bdev_lvol_get_lvstores", 00:16:07.924 "req_id": 1 00:16:07.924 } 00:16:07.924 Got JSON-RPC error response 00:16:07.924 response: 00:16:07.924 { 00:16:07.924 "code": -19, 00:16:07.924 "message": "No such device" 00:16:07.924 } 00:16:07.924 15:58:47 -- common/autotest_common.sh@641 -- # es=1 00:16:07.924 15:58:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:07.924 15:58:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:07.924 15:58:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:07.924 15:58:47 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:07.924 aio_bdev 00:16:07.924 15:58:47 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 90078a33-2491-445b-b618-64c6b2a5a3d5 00:16:07.924 15:58:47 -- 
common/autotest_common.sh@885 -- # local bdev_name=90078a33-2491-445b-b618-64c6b2a5a3d5 00:16:07.924 15:58:47 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:07.924 15:58:47 -- common/autotest_common.sh@887 -- # local i 00:16:07.924 15:58:47 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:07.924 15:58:47 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:07.924 15:58:47 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:08.183 15:58:47 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 90078a33-2491-445b-b618-64c6b2a5a3d5 -t 2000 00:16:08.183 [ 00:16:08.183 { 00:16:08.183 "name": "90078a33-2491-445b-b618-64c6b2a5a3d5", 00:16:08.183 "aliases": [ 00:16:08.183 "lvs/lvol" 00:16:08.183 ], 00:16:08.183 "product_name": "Logical Volume", 00:16:08.183 "block_size": 4096, 00:16:08.183 "num_blocks": 38912, 00:16:08.183 "uuid": "90078a33-2491-445b-b618-64c6b2a5a3d5", 00:16:08.183 "assigned_rate_limits": { 00:16:08.183 "rw_ios_per_sec": 0, 00:16:08.183 "rw_mbytes_per_sec": 0, 00:16:08.183 "r_mbytes_per_sec": 0, 00:16:08.183 "w_mbytes_per_sec": 0 00:16:08.183 }, 00:16:08.183 "claimed": false, 00:16:08.183 "zoned": false, 00:16:08.183 "supported_io_types": { 00:16:08.183 "read": true, 00:16:08.183 "write": true, 00:16:08.183 "unmap": true, 00:16:08.183 "write_zeroes": true, 00:16:08.183 "flush": false, 00:16:08.183 "reset": true, 00:16:08.183 "compare": false, 00:16:08.183 "compare_and_write": false, 00:16:08.183 "abort": false, 00:16:08.183 "nvme_admin": false, 00:16:08.183 "nvme_io": false 00:16:08.183 }, 00:16:08.183 "driver_specific": { 00:16:08.183 "lvol": { 00:16:08.183 "lvol_store_uuid": "f17266b9-358c-4544-8332-5ad81add6c51", 00:16:08.183 "base_bdev": "aio_bdev", 00:16:08.183 "thin_provision": false, 00:16:08.183 "snapshot": false, 00:16:08.183 "clone": false, 00:16:08.183 "esnap_clone": false 00:16:08.183 } 00:16:08.183 } 00:16:08.183 } 00:16:08.183 ] 00:16:08.442 15:58:47 -- common/autotest_common.sh@893 -- # return 0 00:16:08.442 15:58:47 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:16:08.442 15:58:47 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f17266b9-358c-4544-8332-5ad81add6c51 00:16:08.442 15:58:48 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:16:08.442 15:58:48 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f17266b9-358c-4544-8332-5ad81add6c51 00:16:08.442 15:58:48 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:16:08.700 15:58:48 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:16:08.700 15:58:48 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 90078a33-2491-445b-b618-64c6b2a5a3d5 00:16:08.700 15:58:48 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f17266b9-358c-4544-8332-5ad81add6c51 00:16:08.959 15:58:48 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:09.217 15:58:48 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:09.217 00:16:09.217 real 0m18.214s 00:16:09.217 user 
0m46.880s 00:16:09.217 sys 0m4.000s 00:16:09.217 15:58:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:09.217 15:58:48 -- common/autotest_common.sh@10 -- # set +x 00:16:09.217 ************************************ 00:16:09.217 END TEST lvs_grow_dirty 00:16:09.217 ************************************ 00:16:09.217 15:58:48 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:16:09.217 15:58:48 -- common/autotest_common.sh@794 -- # type=--id 00:16:09.217 15:58:48 -- common/autotest_common.sh@795 -- # id=0 00:16:09.217 15:58:48 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:16:09.217 15:58:48 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:09.217 15:58:48 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:16:09.217 15:58:48 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:16:09.217 15:58:48 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:16:09.217 15:58:48 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:09.217 nvmf_trace.0 00:16:09.217 15:58:48 -- common/autotest_common.sh@809 -- # return 0 00:16:09.217 15:58:48 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:09.217 15:58:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:09.217 15:58:48 -- nvmf/common.sh@117 -- # sync 00:16:09.217 15:58:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:09.217 15:58:48 -- nvmf/common.sh@120 -- # set +e 00:16:09.217 15:58:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:09.217 15:58:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:09.217 rmmod nvme_tcp 00:16:09.217 rmmod nvme_fabrics 00:16:09.217 rmmod nvme_keyring 00:16:09.475 15:58:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:09.475 15:58:48 -- nvmf/common.sh@124 -- # set -e 00:16:09.475 15:58:48 -- nvmf/common.sh@125 -- # return 0 00:16:09.475 15:58:48 -- nvmf/common.sh@478 -- # '[' -n 2419961 ']' 00:16:09.475 15:58:48 -- nvmf/common.sh@479 -- # killprocess 2419961 00:16:09.475 15:58:48 -- common/autotest_common.sh@936 -- # '[' -z 2419961 ']' 00:16:09.475 15:58:48 -- common/autotest_common.sh@940 -- # kill -0 2419961 00:16:09.475 15:58:48 -- common/autotest_common.sh@941 -- # uname 00:16:09.475 15:58:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:09.475 15:58:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2419961 00:16:09.475 15:58:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:09.475 15:58:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:09.475 15:58:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2419961' 00:16:09.475 killing process with pid 2419961 00:16:09.475 15:58:48 -- common/autotest_common.sh@955 -- # kill 2419961 00:16:09.475 15:58:48 -- common/autotest_common.sh@960 -- # wait 2419961 00:16:10.849 15:58:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:10.849 15:58:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:10.849 15:58:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:10.849 15:58:50 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:10.849 15:58:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:10.849 15:58:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.849 15:58:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:10.849 15:58:50 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:16:12.750 15:58:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:12.750 00:16:12.750 real 0m45.363s 00:16:12.750 user 1m9.717s 00:16:12.750 sys 0m10.436s 00:16:12.750 15:58:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:12.750 15:58:52 -- common/autotest_common.sh@10 -- # set +x 00:16:12.750 ************************************ 00:16:12.750 END TEST nvmf_lvs_grow 00:16:12.750 ************************************ 00:16:12.750 15:58:52 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:12.750 15:58:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:12.750 15:58:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:12.750 15:58:52 -- common/autotest_common.sh@10 -- # set +x 00:16:13.010 ************************************ 00:16:13.010 START TEST nvmf_bdev_io_wait 00:16:13.010 ************************************ 00:16:13.010 15:58:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:13.010 * Looking for test storage... 00:16:13.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:13.010 15:58:52 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.010 15:58:52 -- nvmf/common.sh@7 -- # uname -s 00:16:13.010 15:58:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.010 15:58:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.010 15:58:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.010 15:58:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.010 15:58:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.010 15:58:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.010 15:58:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.010 15:58:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.010 15:58:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.010 15:58:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.010 15:58:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.010 15:58:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.010 15:58:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.010 15:58:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.010 15:58:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.010 15:58:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.010 15:58:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:13.010 15:58:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.010 15:58:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.010 15:58:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.010 15:58:52 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.010 15:58:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.010 15:58:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.010 15:58:52 -- paths/export.sh@5 -- # export PATH 00:16:13.010 15:58:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.010 15:58:52 -- nvmf/common.sh@47 -- # : 0 00:16:13.010 15:58:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:13.010 15:58:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:13.010 15:58:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.010 15:58:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.010 15:58:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.010 15:58:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:13.010 15:58:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:13.010 15:58:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:13.010 15:58:52 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:13.010 15:58:52 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:13.010 15:58:52 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:13.010 15:58:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:13.010 15:58:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.010 15:58:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:13.010 15:58:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:13.010 15:58:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:13.010 15:58:52 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.011 15:58:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.011 15:58:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.011 15:58:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:13.011 15:58:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:13.011 15:58:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:13.011 15:58:52 -- common/autotest_common.sh@10 -- # set +x 00:16:18.288 15:58:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:18.288 15:58:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:18.288 15:58:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:18.288 15:58:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:18.288 15:58:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:18.288 15:58:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:18.288 15:58:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:18.288 15:58:57 -- nvmf/common.sh@295 -- # net_devs=() 00:16:18.288 15:58:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:18.288 15:58:57 -- nvmf/common.sh@296 -- # e810=() 00:16:18.288 15:58:57 -- nvmf/common.sh@296 -- # local -ga e810 00:16:18.288 15:58:57 -- nvmf/common.sh@297 -- # x722=() 00:16:18.288 15:58:57 -- nvmf/common.sh@297 -- # local -ga x722 00:16:18.288 15:58:57 -- nvmf/common.sh@298 -- # mlx=() 00:16:18.288 15:58:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:18.288 15:58:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:18.288 15:58:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:18.288 15:58:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:18.288 15:58:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:18.288 15:58:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:18.288 15:58:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:18.288 15:58:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:18.288 15:58:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:18.288 15:58:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:18.288 15:58:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:18.288 15:58:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:18.288 15:58:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:18.288 15:58:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:18.288 15:58:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:18.288 15:58:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:18.288 15:58:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:18.288 15:58:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:18.288 15:58:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.288 15:58:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:18.288 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:18.288 15:58:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.288 15:58:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.288 15:58:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.288 15:58:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.288 15:58:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.288 15:58:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
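The device scan above has matched the first e810 physical function (0000:86:00.0, device ID 0x159b, bound to ice) and the loop is about to do the same for 0000:86:00.1; each function is then resolved to its kernel net device by listing sysfs, which is where the cvl_0_0/cvl_0_1 names in the following lines come from. A hand-run equivalent of that lookup, assuming the same PCI addresses as this rig:

    # nvmf/common.sh resolves each matched PCI function to its netdev via sysfs
    for pci in 0000:86:00.0 0000:86:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"   # prints cvl_0_0 / cvl_0_1 here
    done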
00:16:18.288 15:58:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:18.288 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:18.288 15:58:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.288 15:58:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.288 15:58:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.288 15:58:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.288 15:58:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.288 15:58:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:18.288 15:58:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:18.288 15:58:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:18.288 15:58:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.288 15:58:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.288 15:58:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:18.288 15:58:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.288 15:58:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:18.288 Found net devices under 0000:86:00.0: cvl_0_0 00:16:18.288 15:58:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.288 15:58:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.288 15:58:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.288 15:58:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:18.288 15:58:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.288 15:58:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:18.288 Found net devices under 0000:86:00.1: cvl_0_1 00:16:18.288 15:58:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.288 15:58:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:18.288 15:58:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:18.288 15:58:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:18.288 15:58:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:18.288 15:58:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:18.288 15:58:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.288 15:58:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:18.288 15:58:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:18.288 15:58:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:18.288 15:58:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:18.288 15:58:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:18.288 15:58:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:18.288 15:58:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:18.288 15:58:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.288 15:58:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:18.288 15:58:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:18.288 15:58:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:18.288 15:58:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:18.288 15:58:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:18.288 15:58:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:18.288 15:58:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:18.288 15:58:57 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:18.288 15:58:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:18.288 15:58:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:18.288 15:58:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:18.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:18.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:16:18.288 00:16:18.288 --- 10.0.0.2 ping statistics --- 00:16:18.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.289 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:16:18.289 15:58:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:18.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:18.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:16:18.289 00:16:18.289 --- 10.0.0.1 ping statistics --- 00:16:18.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.289 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:16:18.289 15:58:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:18.289 15:58:57 -- nvmf/common.sh@411 -- # return 0 00:16:18.289 15:58:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:18.289 15:58:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:18.289 15:58:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:18.289 15:58:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:18.289 15:58:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:18.289 15:58:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:18.289 15:58:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:18.289 15:58:57 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:18.289 15:58:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:18.289 15:58:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:18.289 15:58:57 -- common/autotest_common.sh@10 -- # set +x 00:16:18.289 15:58:57 -- nvmf/common.sh@470 -- # nvmfpid=2424242 00:16:18.289 15:58:57 -- nvmf/common.sh@471 -- # waitforlisten 2424242 00:16:18.289 15:58:57 -- common/autotest_common.sh@817 -- # '[' -z 2424242 ']' 00:16:18.289 15:58:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.289 15:58:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:18.289 15:58:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.289 15:58:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:18.289 15:58:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:18.289 15:58:57 -- common/autotest_common.sh@10 -- # set +x 00:16:18.289 [2024-04-26 15:58:57.810850] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
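The nvmftestinit trace above carves the two ice ports into a point-to-point TCP test network: the target port cvl_0_0 is moved into its own namespace with 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24, port 4420 is opened in iptables, and both directions are verified with ping before nvmf_tgt is started inside the namespace. The essential commands, condensed from the trace (interface names and addresses exactly as used on this rig):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1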
00:16:18.289 [2024-04-26 15:58:57.810937] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.289 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.289 [2024-04-26 15:58:57.919187] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:18.549 [2024-04-26 15:58:58.137368] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.549 [2024-04-26 15:58:58.137414] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.549 [2024-04-26 15:58:58.137425] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:18.549 [2024-04-26 15:58:58.137436] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:18.549 [2024-04-26 15:58:58.137444] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:18.549 [2024-04-26 15:58:58.137519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.549 [2024-04-26 15:58:58.137587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.549 [2024-04-26 15:58:58.137650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.549 [2024-04-26 15:58:58.137660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:19.118 15:58:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:19.118 15:58:58 -- common/autotest_common.sh@850 -- # return 0 00:16:19.118 15:58:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:19.118 15:58:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:19.118 15:58:58 -- common/autotest_common.sh@10 -- # set +x 00:16:19.118 15:58:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.118 15:58:58 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:19.118 15:58:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.118 15:58:58 -- common/autotest_common.sh@10 -- # set +x 00:16:19.118 15:58:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.118 15:58:58 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:19.118 15:58:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.118 15:58:58 -- common/autotest_common.sh@10 -- # set +x 00:16:19.378 15:58:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.378 15:58:58 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:19.378 15:58:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.378 15:58:58 -- common/autotest_common.sh@10 -- # set +x 00:16:19.378 [2024-04-26 15:58:58.929056] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:19.378 15:58:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.378 15:58:58 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:19.378 15:58:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.378 15:58:58 -- common/autotest_common.sh@10 -- # set +x 00:16:19.378 Malloc0 00:16:19.378 15:58:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.378 15:58:59 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:19.378 15:58:59 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.378 15:58:59 -- common/autotest_common.sh@10 -- # set +x 00:16:19.378 15:58:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.378 15:58:59 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:19.378 15:58:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.378 15:58:59 -- common/autotest_common.sh@10 -- # set +x 00:16:19.639 15:58:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.639 15:58:59 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:19.639 15:58:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.639 15:58:59 -- common/autotest_common.sh@10 -- # set +x 00:16:19.639 [2024-04-26 15:58:59.065866] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:19.639 15:58:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.639 15:58:59 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2424494 00:16:19.639 15:58:59 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:19.639 15:58:59 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:19.639 15:58:59 -- target/bdev_io_wait.sh@30 -- # READ_PID=2424496 00:16:19.639 15:58:59 -- nvmf/common.sh@521 -- # config=() 00:16:19.639 15:58:59 -- nvmf/common.sh@521 -- # local subsystem config 00:16:19.639 15:58:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:19.639 15:58:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:19.639 { 00:16:19.639 "params": { 00:16:19.639 "name": "Nvme$subsystem", 00:16:19.639 "trtype": "$TEST_TRANSPORT", 00:16:19.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:19.639 "adrfam": "ipv4", 00:16:19.639 "trsvcid": "$NVMF_PORT", 00:16:19.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:19.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:19.639 "hdgst": ${hdgst:-false}, 00:16:19.639 "ddgst": ${ddgst:-false} 00:16:19.639 }, 00:16:19.639 "method": "bdev_nvme_attach_controller" 00:16:19.639 } 00:16:19.639 EOF 00:16:19.639 )") 00:16:19.639 15:58:59 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:19.639 15:58:59 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:19.639 15:58:59 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2424498 00:16:19.639 15:58:59 -- nvmf/common.sh@521 -- # config=() 00:16:19.639 15:58:59 -- nvmf/common.sh@521 -- # local subsystem config 00:16:19.639 15:58:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:19.639 15:58:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:19.639 { 00:16:19.639 "params": { 00:16:19.639 "name": "Nvme$subsystem", 00:16:19.639 "trtype": "$TEST_TRANSPORT", 00:16:19.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:19.639 "adrfam": "ipv4", 00:16:19.639 "trsvcid": "$NVMF_PORT", 00:16:19.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:19.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:19.639 "hdgst": ${hdgst:-false}, 00:16:19.639 "ddgst": ${ddgst:-false} 00:16:19.639 }, 00:16:19.639 "method": "bdev_nvme_attach_controller" 00:16:19.639 } 00:16:19.639 EOF 00:16:19.639 )") 00:16:19.639 15:58:59 -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:19.639 15:58:59 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2424501 00:16:19.639 15:58:59 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:19.639 15:58:59 -- target/bdev_io_wait.sh@35 -- # sync 00:16:19.639 15:58:59 -- nvmf/common.sh@543 -- # cat 00:16:19.639 15:58:59 -- nvmf/common.sh@521 -- # config=() 00:16:19.639 15:58:59 -- nvmf/common.sh@521 -- # local subsystem config 00:16:19.639 15:58:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:19.639 15:58:59 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:19.639 15:58:59 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:19.639 15:58:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:19.639 { 00:16:19.639 "params": { 00:16:19.639 "name": "Nvme$subsystem", 00:16:19.639 "trtype": "$TEST_TRANSPORT", 00:16:19.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:19.639 "adrfam": "ipv4", 00:16:19.639 "trsvcid": "$NVMF_PORT", 00:16:19.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:19.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:19.639 "hdgst": ${hdgst:-false}, 00:16:19.639 "ddgst": ${ddgst:-false} 00:16:19.639 }, 00:16:19.639 "method": "bdev_nvme_attach_controller" 00:16:19.639 } 00:16:19.639 EOF 00:16:19.639 )") 00:16:19.639 15:58:59 -- nvmf/common.sh@521 -- # config=() 00:16:19.639 15:58:59 -- nvmf/common.sh@543 -- # cat 00:16:19.639 15:58:59 -- nvmf/common.sh@521 -- # local subsystem config 00:16:19.639 15:58:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:19.639 15:58:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:19.639 { 00:16:19.639 "params": { 00:16:19.639 "name": "Nvme$subsystem", 00:16:19.639 "trtype": "$TEST_TRANSPORT", 00:16:19.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:19.639 "adrfam": "ipv4", 00:16:19.639 "trsvcid": "$NVMF_PORT", 00:16:19.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:19.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:19.639 "hdgst": ${hdgst:-false}, 00:16:19.639 "ddgst": ${ddgst:-false} 00:16:19.639 }, 00:16:19.639 "method": "bdev_nvme_attach_controller" 00:16:19.639 } 00:16:19.639 EOF 00:16:19.639 )") 00:16:19.639 15:58:59 -- nvmf/common.sh@543 -- # cat 00:16:19.639 15:58:59 -- target/bdev_io_wait.sh@37 -- # wait 2424494 00:16:19.639 15:58:59 -- nvmf/common.sh@543 -- # cat 00:16:19.639 15:58:59 -- nvmf/common.sh@545 -- # jq . 00:16:19.639 15:58:59 -- nvmf/common.sh@545 -- # jq . 00:16:19.639 15:58:59 -- nvmf/common.sh@545 -- # jq . 00:16:19.639 15:58:59 -- nvmf/common.sh@546 -- # IFS=, 00:16:19.639 15:58:59 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:19.639 "params": { 00:16:19.639 "name": "Nvme1", 00:16:19.639 "trtype": "tcp", 00:16:19.639 "traddr": "10.0.0.2", 00:16:19.639 "adrfam": "ipv4", 00:16:19.639 "trsvcid": "4420", 00:16:19.639 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:19.639 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:19.639 "hdgst": false, 00:16:19.639 "ddgst": false 00:16:19.639 }, 00:16:19.639 "method": "bdev_nvme_attach_controller" 00:16:19.639 }' 00:16:19.639 15:58:59 -- nvmf/common.sh@545 -- # jq . 
00:16:19.639 15:58:59 -- nvmf/common.sh@546 -- # IFS=, 00:16:19.639 15:58:59 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:19.639 "params": { 00:16:19.639 "name": "Nvme1", 00:16:19.639 "trtype": "tcp", 00:16:19.639 "traddr": "10.0.0.2", 00:16:19.639 "adrfam": "ipv4", 00:16:19.639 "trsvcid": "4420", 00:16:19.639 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:19.639 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:19.639 "hdgst": false, 00:16:19.639 "ddgst": false 00:16:19.639 }, 00:16:19.640 "method": "bdev_nvme_attach_controller" 00:16:19.640 }' 00:16:19.640 15:58:59 -- nvmf/common.sh@546 -- # IFS=, 00:16:19.640 15:58:59 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:19.640 "params": { 00:16:19.640 "name": "Nvme1", 00:16:19.640 "trtype": "tcp", 00:16:19.640 "traddr": "10.0.0.2", 00:16:19.640 "adrfam": "ipv4", 00:16:19.640 "trsvcid": "4420", 00:16:19.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:19.640 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:19.640 "hdgst": false, 00:16:19.640 "ddgst": false 00:16:19.640 }, 00:16:19.640 "method": "bdev_nvme_attach_controller" 00:16:19.640 }' 00:16:19.640 15:58:59 -- nvmf/common.sh@546 -- # IFS=, 00:16:19.640 15:58:59 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:19.640 "params": { 00:16:19.640 "name": "Nvme1", 00:16:19.640 "trtype": "tcp", 00:16:19.640 "traddr": "10.0.0.2", 00:16:19.640 "adrfam": "ipv4", 00:16:19.640 "trsvcid": "4420", 00:16:19.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:19.640 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:19.640 "hdgst": false, 00:16:19.640 "ddgst": false 00:16:19.640 }, 00:16:19.640 "method": "bdev_nvme_attach_controller" 00:16:19.640 }' 00:16:19.640 [2024-04-26 15:58:59.142514] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:19.640 [2024-04-26 15:58:59.142615] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:19.640 [2024-04-26 15:58:59.144676] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:19.640 [2024-04-26 15:58:59.144730] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:19.640 [2024-04-26 15:58:59.144753] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:19.640 [2024-04-26 15:58:59.144834] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:19.640 [2024-04-26 15:58:59.148394] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:16:19.640 [2024-04-26 15:58:59.148489] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:19.640 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.640 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.899 [2024-04-26 15:58:59.367633] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.899 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.899 [2024-04-26 15:58:59.454832] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.899 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.899 [2024-04-26 15:58:59.561839] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.158 [2024-04-26 15:58:59.592323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:20.158 [2024-04-26 15:58:59.606839] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.158 [2024-04-26 15:58:59.667859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:16:20.158 [2024-04-26 15:58:59.787843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:20.158 [2024-04-26 15:58:59.817632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:20.417 Running I/O for 1 seconds... 00:16:20.675 Running I/O for 1 seconds... 00:16:20.934 Running I/O for 1 seconds... 00:16:20.934 Running I/O for 1 seconds... 00:16:21.502 00:16:21.502 Latency(us) 00:16:21.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.502 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:16:21.502 Nvme1n1 : 1.02 7584.68 29.63 0.00 0.00 16747.66 2863.64 24390.79 00:16:21.502 =================================================================================================================== 00:16:21.502 Total : 7584.68 29.63 0.00 0.00 16747.66 2863.64 24390.79 00:16:21.761 00:16:21.761 Latency(us) 00:16:21.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.761 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:21.761 Nvme1n1 : 1.01 7259.45 28.36 0.00 0.00 17577.22 7038.00 32141.13 00:16:21.761 =================================================================================================================== 00:16:21.761 Total : 7259.45 28.36 0.00 0.00 17577.22 7038.00 32141.13 00:16:21.761 00:16:21.761 Latency(us) 00:16:21.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.761 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:16:21.761 Nvme1n1 : 1.01 10385.93 40.57 0.00 0.00 12281.32 5071.92 21655.37 00:16:21.761 =================================================================================================================== 00:16:21.761 Total : 10385.93 40.57 0.00 0.00 12281.32 5071.92 21655.37 00:16:22.019 00:16:22.019 Latency(us) 00:16:22.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.019 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:22.019 Nvme1n1 : 1.00 218658.47 854.13 0.00 0.00 583.29 240.42 769.34 00:16:22.019 =================================================================================================================== 00:16:22.019 Total : 218658.47 854.13 0.00 0.00 583.29 240.42 769.34 00:16:22.587 15:59:02 -- target/bdev_io_wait.sh@38 -- # wait 2424496 00:16:22.845 
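Annotation: the block traced above is nvmf/common.sh's gen_nvmf_target_json being expanded four times, once per bdevperf instance (write, read, flush and unmap jobs on core masks 0x10, 0x20, 0x40 and 0x80, per the latency tables). Each call builds one bdev_nvme_attach_controller stanza from a heredoc, joins the stanzas with IFS=',', and pretty-prints the result with jq before bdevperf reads it on /dev/fd/63. Below is a minimal bash sketch of that expand-and-join pattern using the resolved values printed in the trace; whatever outer wrapper the helper places around these stanzas is not visible here and is not reproduced.

# Sketch only: one attach-controller stanza per subsystem, as seen in the trace above.
config=()
for subsystem in 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
IFS=,
printf '%s\n' "${config[*]}" | jq .   # with one subsystem the IFS join is a no-op

Each of the four bdevperf processes generates its own copy of this config, which is why the same stanza and the same jq/printf trace lines repeat four times above.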
15:59:02 -- target/bdev_io_wait.sh@39 -- # wait 2424498 00:16:22.845 15:59:02 -- target/bdev_io_wait.sh@40 -- # wait 2424501 00:16:22.845 15:59:02 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.845 15:59:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.845 15:59:02 -- common/autotest_common.sh@10 -- # set +x 00:16:22.845 15:59:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.845 15:59:02 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:16:22.845 15:59:02 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:16:22.845 15:59:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:22.845 15:59:02 -- nvmf/common.sh@117 -- # sync 00:16:22.845 15:59:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:22.845 15:59:02 -- nvmf/common.sh@120 -- # set +e 00:16:22.845 15:59:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:22.845 15:59:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:22.845 rmmod nvme_tcp 00:16:22.845 rmmod nvme_fabrics 00:16:22.845 rmmod nvme_keyring 00:16:23.104 15:59:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:23.104 15:59:02 -- nvmf/common.sh@124 -- # set -e 00:16:23.104 15:59:02 -- nvmf/common.sh@125 -- # return 0 00:16:23.104 15:59:02 -- nvmf/common.sh@478 -- # '[' -n 2424242 ']' 00:16:23.104 15:59:02 -- nvmf/common.sh@479 -- # killprocess 2424242 00:16:23.104 15:59:02 -- common/autotest_common.sh@936 -- # '[' -z 2424242 ']' 00:16:23.104 15:59:02 -- common/autotest_common.sh@940 -- # kill -0 2424242 00:16:23.104 15:59:02 -- common/autotest_common.sh@941 -- # uname 00:16:23.104 15:59:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:23.104 15:59:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2424242 00:16:23.104 15:59:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:23.104 15:59:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:23.104 15:59:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2424242' 00:16:23.104 killing process with pid 2424242 00:16:23.104 15:59:02 -- common/autotest_common.sh@955 -- # kill 2424242 00:16:23.104 15:59:02 -- common/autotest_common.sh@960 -- # wait 2424242 00:16:24.482 15:59:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:24.482 15:59:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:24.482 15:59:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:24.482 15:59:03 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:24.482 15:59:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:24.482 15:59:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.482 15:59:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.482 15:59:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.391 15:59:05 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:26.391 00:16:26.391 real 0m13.416s 00:16:26.391 user 0m33.282s 00:16:26.391 sys 0m5.980s 00:16:26.391 15:59:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:26.391 15:59:05 -- common/autotest_common.sh@10 -- # set +x 00:16:26.391 ************************************ 00:16:26.391 END TEST nvmf_bdev_io_wait 00:16:26.391 ************************************ 00:16:26.391 15:59:05 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:26.391 15:59:05 -- common/autotest_common.sh@1087 
-- # '[' 3 -le 1 ']' 00:16:26.391 15:59:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:26.391 15:59:05 -- common/autotest_common.sh@10 -- # set +x 00:16:26.391 ************************************ 00:16:26.391 START TEST nvmf_queue_depth 00:16:26.391 ************************************ 00:16:26.391 15:59:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:26.651 * Looking for test storage... 00:16:26.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:26.651 15:59:06 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:26.651 15:59:06 -- nvmf/common.sh@7 -- # uname -s 00:16:26.651 15:59:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.651 15:59:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.651 15:59:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.651 15:59:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.651 15:59:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.651 15:59:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.651 15:59:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.651 15:59:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.651 15:59:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.651 15:59:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.651 15:59:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.651 15:59:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.651 15:59:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.651 15:59:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.651 15:59:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:26.651 15:59:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.651 15:59:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:26.651 15:59:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.651 15:59:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.651 15:59:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.651 15:59:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.651 15:59:06 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.651 15:59:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.651 15:59:06 -- paths/export.sh@5 -- # export PATH 00:16:26.651 15:59:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.651 15:59:06 -- nvmf/common.sh@47 -- # : 0 00:16:26.651 15:59:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:26.651 15:59:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:26.651 15:59:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.651 15:59:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.651 15:59:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.651 15:59:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:26.651 15:59:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:26.651 15:59:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:26.651 15:59:06 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:16:26.651 15:59:06 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:16:26.651 15:59:06 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:26.651 15:59:06 -- target/queue_depth.sh@19 -- # nvmftestinit 00:16:26.651 15:59:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:26.651 15:59:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.651 15:59:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:26.651 15:59:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:26.651 15:59:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:26.651 15:59:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.651 15:59:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.651 15:59:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.651 15:59:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:26.651 15:59:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:26.651 15:59:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:26.651 15:59:06 -- 
common/autotest_common.sh@10 -- # set +x 00:16:31.924 15:59:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:31.924 15:59:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:31.924 15:59:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:31.924 15:59:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:31.924 15:59:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:31.924 15:59:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:31.924 15:59:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:31.924 15:59:11 -- nvmf/common.sh@295 -- # net_devs=() 00:16:31.924 15:59:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:31.924 15:59:11 -- nvmf/common.sh@296 -- # e810=() 00:16:31.924 15:59:11 -- nvmf/common.sh@296 -- # local -ga e810 00:16:31.924 15:59:11 -- nvmf/common.sh@297 -- # x722=() 00:16:31.924 15:59:11 -- nvmf/common.sh@297 -- # local -ga x722 00:16:31.924 15:59:11 -- nvmf/common.sh@298 -- # mlx=() 00:16:31.924 15:59:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:31.924 15:59:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:31.924 15:59:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:31.924 15:59:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:31.924 15:59:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:31.924 15:59:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:31.924 15:59:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:31.924 15:59:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:31.924 15:59:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:31.924 15:59:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:31.924 15:59:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:31.924 15:59:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:31.924 15:59:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:31.924 15:59:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:31.924 15:59:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:31.924 15:59:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:31.924 15:59:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:31.924 15:59:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:31.924 15:59:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:31.924 15:59:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:31.924 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:31.924 15:59:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:31.924 15:59:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:31.924 15:59:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.924 15:59:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.924 15:59:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:31.924 15:59:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:31.924 15:59:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:31.924 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:31.924 15:59:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:31.924 15:59:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:31.924 15:59:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.924 15:59:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:16:31.924 15:59:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:31.924 15:59:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:31.924 15:59:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:31.924 15:59:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:31.924 15:59:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:31.924 15:59:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.924 15:59:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:31.924 15:59:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.924 15:59:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:31.924 Found net devices under 0000:86:00.0: cvl_0_0 00:16:31.924 15:59:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.924 15:59:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:31.924 15:59:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.924 15:59:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:31.924 15:59:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.924 15:59:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:31.924 Found net devices under 0000:86:00.1: cvl_0_1 00:16:31.924 15:59:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.924 15:59:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:31.924 15:59:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:31.924 15:59:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:31.924 15:59:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:31.924 15:59:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:31.924 15:59:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:31.924 15:59:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:31.924 15:59:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:31.924 15:59:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:31.924 15:59:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:31.924 15:59:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:31.924 15:59:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:31.924 15:59:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:31.924 15:59:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:31.924 15:59:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:31.924 15:59:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:31.924 15:59:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:31.924 15:59:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:31.924 15:59:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:31.924 15:59:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:31.924 15:59:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:31.924 15:59:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:31.924 15:59:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:32.181 15:59:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:32.181 15:59:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:32.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:32.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:16:32.181 00:16:32.181 --- 10.0.0.2 ping statistics --- 00:16:32.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.181 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:16:32.181 15:59:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:32.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:32.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.465 ms 00:16:32.181 00:16:32.181 --- 10.0.0.1 ping statistics --- 00:16:32.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.181 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:16:32.181 15:59:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:32.181 15:59:11 -- nvmf/common.sh@411 -- # return 0 00:16:32.181 15:59:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:32.182 15:59:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:32.182 15:59:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:32.182 15:59:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:32.182 15:59:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:32.182 15:59:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:32.182 15:59:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:32.182 15:59:11 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:16:32.182 15:59:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:32.182 15:59:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:32.182 15:59:11 -- common/autotest_common.sh@10 -- # set +x 00:16:32.182 15:59:11 -- nvmf/common.sh@470 -- # nvmfpid=2428742 00:16:32.182 15:59:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:32.182 15:59:11 -- nvmf/common.sh@471 -- # waitforlisten 2428742 00:16:32.182 15:59:11 -- common/autotest_common.sh@817 -- # '[' -z 2428742 ']' 00:16:32.182 15:59:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.182 15:59:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:32.182 15:59:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.182 15:59:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:32.182 15:59:11 -- common/autotest_common.sh@10 -- # set +x 00:16:32.182 [2024-04-26 15:59:11.769000] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:32.182 [2024-04-26 15:59:11.769093] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.182 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.439 [2024-04-26 15:59:11.879713] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.439 [2024-04-26 15:59:12.087668] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:32.439 [2024-04-26 15:59:12.087716] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:32.439 [2024-04-26 15:59:12.087726] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:32.439 [2024-04-26 15:59:12.087736] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:32.439 [2024-04-26 15:59:12.087747] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:32.439 [2024-04-26 15:59:12.087777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.058 15:59:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:33.058 15:59:12 -- common/autotest_common.sh@850 -- # return 0 00:16:33.059 15:59:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:33.059 15:59:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:33.059 15:59:12 -- common/autotest_common.sh@10 -- # set +x 00:16:33.059 15:59:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.059 15:59:12 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:33.059 15:59:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:33.059 15:59:12 -- common/autotest_common.sh@10 -- # set +x 00:16:33.059 [2024-04-26 15:59:12.578333] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:33.059 15:59:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:33.059 15:59:12 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:33.059 15:59:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:33.059 15:59:12 -- common/autotest_common.sh@10 -- # set +x 00:16:33.059 Malloc0 00:16:33.059 15:59:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:33.059 15:59:12 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:33.059 15:59:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:33.059 15:59:12 -- common/autotest_common.sh@10 -- # set +x 00:16:33.059 15:59:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:33.059 15:59:12 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:33.059 15:59:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:33.059 15:59:12 -- common/autotest_common.sh@10 -- # set +x 00:16:33.059 15:59:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:33.059 15:59:12 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:33.059 15:59:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:33.059 15:59:12 -- common/autotest_common.sh@10 -- # set +x 00:16:33.059 [2024-04-26 15:59:12.702198] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.059 15:59:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:33.059 15:59:12 -- target/queue_depth.sh@30 -- # bdevperf_pid=2428923 00:16:33.059 15:59:12 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:33.059 15:59:12 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:33.059 15:59:12 -- target/queue_depth.sh@33 -- # waitforlisten 2428923 /var/tmp/bdevperf.sock 00:16:33.059 15:59:12 -- common/autotest_common.sh@817 -- # '[' -z 2428923 ']' 
00:16:33.059 15:59:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:33.059 15:59:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:33.059 15:59:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:33.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:33.059 15:59:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:33.059 15:59:12 -- common/autotest_common.sh@10 -- # set +x 00:16:33.317 [2024-04-26 15:59:12.775183] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:33.317 [2024-04-26 15:59:12.775276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2428923 ] 00:16:33.317 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.317 [2024-04-26 15:59:12.879603] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.576 [2024-04-26 15:59:13.104922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.141 15:59:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:34.141 15:59:13 -- common/autotest_common.sh@850 -- # return 0 00:16:34.141 15:59:13 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:34.141 15:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.141 15:59:13 -- common/autotest_common.sh@10 -- # set +x 00:16:34.141 NVMe0n1 00:16:34.141 15:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.141 15:59:13 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:34.141 Running I/O for 10 seconds... 
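Annotation: stripped of the xtrace noise, the queue_depth setup just traced reduces to a short RPC sequence. The netns-scoped nvmf_tgt is given a TCP transport, a 64 MiB / 512 B malloc bdev, and a subsystem listening on 10.0.0.2:4420; a bdevperf started in -z (wait) mode then has the controller attached over its own RPC socket, and perform_tests drives the 1024-deep verify workload for 10 seconds. A condensed sketch, calling scripts/rpc.py directly where the script uses its rpc_cmd wrapper (flags and paths as shown in the trace; run from the SPDK repository root):

rpc=scripts/rpc.py    # rpc_cmd in the trace is a thin wrapper around this script

# Target side (nvmf_tgt is already running inside cvl_0_0_ns_spdk with -m 0x2)
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf waits for RPCs, the controller is attached, then the run starts.
# (The test script additionally waits for each RPC socket to come up before issuing commands.)
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests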
00:16:46.348 00:16:46.348 Latency(us) 00:16:46.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.348 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:16:46.348 Verification LBA range: start 0x0 length 0x4000 00:16:46.348 NVMe0n1 : 10.08 10466.32 40.88 0.00 0.00 97481.63 21199.47 63826.37 00:16:46.348 =================================================================================================================== 00:16:46.348 Total : 10466.32 40.88 0.00 0.00 97481.63 21199.47 63826.37 00:16:46.348 0 00:16:46.348 15:59:23 -- target/queue_depth.sh@39 -- # killprocess 2428923 00:16:46.348 15:59:23 -- common/autotest_common.sh@936 -- # '[' -z 2428923 ']' 00:16:46.348 15:59:23 -- common/autotest_common.sh@940 -- # kill -0 2428923 00:16:46.348 15:59:23 -- common/autotest_common.sh@941 -- # uname 00:16:46.348 15:59:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:46.348 15:59:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2428923 00:16:46.348 15:59:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:46.348 15:59:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:46.348 15:59:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2428923' 00:16:46.348 killing process with pid 2428923 00:16:46.348 15:59:23 -- common/autotest_common.sh@955 -- # kill 2428923 00:16:46.348 Received shutdown signal, test time was about 10.000000 seconds 00:16:46.348 00:16:46.348 Latency(us) 00:16:46.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.348 =================================================================================================================== 00:16:46.348 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:46.348 15:59:23 -- common/autotest_common.sh@960 -- # wait 2428923 00:16:46.348 15:59:24 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:46.348 15:59:24 -- target/queue_depth.sh@43 -- # nvmftestfini 00:16:46.348 15:59:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:46.348 15:59:24 -- nvmf/common.sh@117 -- # sync 00:16:46.348 15:59:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:46.348 15:59:24 -- nvmf/common.sh@120 -- # set +e 00:16:46.348 15:59:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:46.348 15:59:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:46.348 rmmod nvme_tcp 00:16:46.348 rmmod nvme_fabrics 00:16:46.348 rmmod nvme_keyring 00:16:46.348 15:59:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:46.348 15:59:24 -- nvmf/common.sh@124 -- # set -e 00:16:46.348 15:59:24 -- nvmf/common.sh@125 -- # return 0 00:16:46.348 15:59:24 -- nvmf/common.sh@478 -- # '[' -n 2428742 ']' 00:16:46.348 15:59:24 -- nvmf/common.sh@479 -- # killprocess 2428742 00:16:46.348 15:59:24 -- common/autotest_common.sh@936 -- # '[' -z 2428742 ']' 00:16:46.348 15:59:24 -- common/autotest_common.sh@940 -- # kill -0 2428742 00:16:46.348 15:59:24 -- common/autotest_common.sh@941 -- # uname 00:16:46.348 15:59:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:46.348 15:59:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2428742 00:16:46.348 15:59:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:46.348 15:59:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:46.348 15:59:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2428742' 00:16:46.348 killing process with pid 2428742 00:16:46.348 
15:59:25 -- common/autotest_common.sh@955 -- # kill 2428742 00:16:46.348 15:59:25 -- common/autotest_common.sh@960 -- # wait 2428742 00:16:46.914 15:59:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:46.914 15:59:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:46.914 15:59:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:46.914 15:59:26 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:46.914 15:59:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:46.914 15:59:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.914 15:59:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.914 15:59:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.816 15:59:28 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:49.075 00:16:49.075 real 0m22.467s 00:16:49.075 user 0m27.553s 00:16:49.075 sys 0m5.978s 00:16:49.075 15:59:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:49.075 15:59:28 -- common/autotest_common.sh@10 -- # set +x 00:16:49.075 ************************************ 00:16:49.075 END TEST nvmf_queue_depth 00:16:49.075 ************************************ 00:16:49.075 15:59:28 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:49.075 15:59:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:49.075 15:59:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:49.075 15:59:28 -- common/autotest_common.sh@10 -- # set +x 00:16:49.075 ************************************ 00:16:49.075 START TEST nvmf_multipath 00:16:49.075 ************************************ 00:16:49.075 15:59:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:49.075 * Looking for test storage... 
00:16:49.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:49.075 15:59:28 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:49.075 15:59:28 -- nvmf/common.sh@7 -- # uname -s 00:16:49.075 15:59:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.075 15:59:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.075 15:59:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.075 15:59:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.075 15:59:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.075 15:59:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.075 15:59:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.075 15:59:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.075 15:59:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.075 15:59:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.334 15:59:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.334 15:59:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.334 15:59:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.334 15:59:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.334 15:59:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:49.334 15:59:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:49.334 15:59:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:49.334 15:59:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.334 15:59:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.334 15:59:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.334 15:59:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.334 15:59:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.334 15:59:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.334 15:59:28 -- paths/export.sh@5 -- # export PATH 00:16:49.334 15:59:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.334 15:59:28 -- nvmf/common.sh@47 -- # : 0 00:16:49.334 15:59:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:49.334 15:59:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:49.334 15:59:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:49.334 15:59:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.334 15:59:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.334 15:59:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:49.334 15:59:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:49.334 15:59:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:49.334 15:59:28 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:49.334 15:59:28 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:49.334 15:59:28 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:49.334 15:59:28 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.334 15:59:28 -- target/multipath.sh@43 -- # nvmftestinit 00:16:49.334 15:59:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:49.334 15:59:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.334 15:59:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:49.334 15:59:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:49.334 15:59:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:49.334 15:59:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.334 15:59:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.334 15:59:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.334 15:59:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:49.334 15:59:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:49.334 15:59:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:49.334 15:59:28 -- common/autotest_common.sh@10 -- # set +x 00:16:54.607 15:59:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:54.607 15:59:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:54.607 15:59:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:54.607 15:59:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:54.607 15:59:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:54.607 15:59:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:54.607 15:59:33 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:16:54.607 15:59:33 -- nvmf/common.sh@295 -- # net_devs=() 00:16:54.607 15:59:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:54.607 15:59:33 -- nvmf/common.sh@296 -- # e810=() 00:16:54.607 15:59:33 -- nvmf/common.sh@296 -- # local -ga e810 00:16:54.607 15:59:33 -- nvmf/common.sh@297 -- # x722=() 00:16:54.607 15:59:33 -- nvmf/common.sh@297 -- # local -ga x722 00:16:54.607 15:59:33 -- nvmf/common.sh@298 -- # mlx=() 00:16:54.607 15:59:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:54.607 15:59:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:54.607 15:59:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:54.607 15:59:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:54.607 15:59:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:54.607 15:59:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:54.607 15:59:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:54.607 15:59:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:54.607 15:59:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:54.607 15:59:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:54.607 15:59:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:54.607 15:59:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:54.607 15:59:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:54.607 15:59:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:54.607 15:59:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:54.607 15:59:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:54.607 15:59:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:54.607 15:59:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:54.607 15:59:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:54.607 15:59:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:54.607 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:54.607 15:59:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:54.607 15:59:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:54.607 15:59:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.607 15:59:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.607 15:59:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:54.607 15:59:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:54.607 15:59:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:54.607 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:54.607 15:59:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:54.607 15:59:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:54.607 15:59:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.607 15:59:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.607 15:59:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:54.607 15:59:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:54.607 15:59:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:54.607 15:59:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:54.607 15:59:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:54.607 15:59:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.607 15:59:33 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:16:54.607 15:59:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.607 15:59:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:54.607 Found net devices under 0000:86:00.0: cvl_0_0 00:16:54.607 15:59:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.607 15:59:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:54.607 15:59:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.607 15:59:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:54.607 15:59:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.607 15:59:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:54.607 Found net devices under 0000:86:00.1: cvl_0_1 00:16:54.607 15:59:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.607 15:59:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:54.607 15:59:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:54.607 15:59:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:54.607 15:59:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:54.607 15:59:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:54.607 15:59:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.607 15:59:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:54.607 15:59:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:54.607 15:59:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:54.607 15:59:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:54.607 15:59:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:54.607 15:59:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:54.607 15:59:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:54.607 15:59:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:54.607 15:59:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:54.607 15:59:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:54.607 15:59:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:54.607 15:59:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:54.607 15:59:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:54.607 15:59:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:54.607 15:59:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:54.607 15:59:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:54.607 15:59:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:54.607 15:59:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:54.607 15:59:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:54.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:54.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:16:54.607 00:16:54.607 --- 10.0.0.2 ping statistics --- 00:16:54.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.607 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:16:54.607 15:59:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:54.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:54.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:16:54.607 00:16:54.607 --- 10.0.0.1 ping statistics --- 00:16:54.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.607 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:16:54.607 15:59:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.607 15:59:34 -- nvmf/common.sh@411 -- # return 0 00:16:54.607 15:59:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:54.607 15:59:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.607 15:59:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:54.607 15:59:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:54.607 15:59:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.607 15:59:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:54.607 15:59:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:54.607 15:59:34 -- target/multipath.sh@45 -- # '[' -z ']' 00:16:54.607 15:59:34 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:54.607 only one NIC for nvmf test 00:16:54.607 15:59:34 -- target/multipath.sh@47 -- # nvmftestfini 00:16:54.607 15:59:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:54.607 15:59:34 -- nvmf/common.sh@117 -- # sync 00:16:54.607 15:59:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:54.607 15:59:34 -- nvmf/common.sh@120 -- # set +e 00:16:54.607 15:59:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:54.607 15:59:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:54.607 rmmod nvme_tcp 00:16:54.607 rmmod nvme_fabrics 00:16:54.607 rmmod nvme_keyring 00:16:54.607 15:59:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:54.607 15:59:34 -- nvmf/common.sh@124 -- # set -e 00:16:54.607 15:59:34 -- nvmf/common.sh@125 -- # return 0 00:16:54.607 15:59:34 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:16:54.607 15:59:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:54.607 15:59:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:54.607 15:59:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:54.607 15:59:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:54.607 15:59:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:54.607 15:59:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.607 15:59:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.607 15:59:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.514 15:59:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:56.514 15:59:36 -- target/multipath.sh@48 -- # exit 0 00:16:56.514 15:59:36 -- target/multipath.sh@1 -- # nvmftestfini 00:16:56.514 15:59:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:56.514 15:59:36 -- nvmf/common.sh@117 -- # sync 00:16:56.774 15:59:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:56.774 15:59:36 -- nvmf/common.sh@120 -- # set +e 00:16:56.774 15:59:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:56.774 15:59:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:56.774 15:59:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:56.774 15:59:36 -- nvmf/common.sh@124 -- # set -e 00:16:56.774 15:59:36 -- nvmf/common.sh@125 -- # return 0 00:16:56.774 15:59:36 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:16:56.774 15:59:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:56.774 15:59:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:56.774 15:59:36 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:16:56.774 15:59:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:56.774 15:59:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:56.774 15:59:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.774 15:59:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.774 15:59:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.774 15:59:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:56.774 00:16:56.774 real 0m7.569s 00:16:56.774 user 0m1.515s 00:16:56.774 sys 0m4.058s 00:16:56.774 15:59:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:56.774 15:59:36 -- common/autotest_common.sh@10 -- # set +x 00:16:56.774 ************************************ 00:16:56.774 END TEST nvmf_multipath 00:16:56.774 ************************************ 00:16:56.774 15:59:36 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:56.774 15:59:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:56.774 15:59:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:56.774 15:59:36 -- common/autotest_common.sh@10 -- # set +x 00:16:56.774 ************************************ 00:16:56.774 START TEST nvmf_zcopy 00:16:56.774 ************************************ 00:16:56.774 15:59:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:56.774 * Looking for test storage... 00:16:56.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:56.774 15:59:36 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.774 15:59:36 -- nvmf/common.sh@7 -- # uname -s 00:16:56.774 15:59:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.774 15:59:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.774 15:59:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.774 15:59:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.774 15:59:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.774 15:59:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.774 15:59:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.774 15:59:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.774 15:59:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.774 15:59:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.774 15:59:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.774 15:59:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.774 15:59:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.774 15:59:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.774 15:59:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.774 15:59:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.774 15:59:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.774 15:59:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.774 15:59:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.774 15:59:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.775 
15:59:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.775 15:59:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.775 15:59:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.775 15:59:36 -- paths/export.sh@5 -- # export PATH 00:16:56.775 15:59:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.775 15:59:36 -- nvmf/common.sh@47 -- # : 0 00:16:56.775 15:59:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.775 15:59:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.775 15:59:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.775 15:59:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.775 15:59:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.775 15:59:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:56.775 15:59:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:57.035 15:59:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:57.035 15:59:36 -- target/zcopy.sh@12 -- # nvmftestinit 00:16:57.035 15:59:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:57.035 15:59:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.035 15:59:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:57.035 15:59:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:57.035 15:59:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:57.035 15:59:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.035 15:59:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:16:57.035 15:59:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.035 15:59:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:57.035 15:59:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:57.035 15:59:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:57.035 15:59:36 -- common/autotest_common.sh@10 -- # set +x 00:17:02.313 15:59:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:02.313 15:59:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:02.313 15:59:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:02.313 15:59:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:02.313 15:59:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:02.313 15:59:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:02.313 15:59:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:02.313 15:59:41 -- nvmf/common.sh@295 -- # net_devs=() 00:17:02.313 15:59:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:02.313 15:59:41 -- nvmf/common.sh@296 -- # e810=() 00:17:02.313 15:59:41 -- nvmf/common.sh@296 -- # local -ga e810 00:17:02.313 15:59:41 -- nvmf/common.sh@297 -- # x722=() 00:17:02.313 15:59:41 -- nvmf/common.sh@297 -- # local -ga x722 00:17:02.313 15:59:41 -- nvmf/common.sh@298 -- # mlx=() 00:17:02.313 15:59:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:02.313 15:59:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:02.313 15:59:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:02.313 15:59:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:02.313 15:59:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:02.313 15:59:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:02.313 15:59:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:02.313 15:59:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:02.313 15:59:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:02.313 15:59:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:02.313 15:59:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:02.313 15:59:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:02.313 15:59:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:02.313 15:59:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:02.313 15:59:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:02.313 15:59:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:02.313 15:59:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:02.313 15:59:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:02.313 15:59:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:02.313 15:59:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:02.313 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:02.313 15:59:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:02.313 15:59:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:02.313 15:59:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.313 15:59:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.313 15:59:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:02.313 15:59:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:02.313 15:59:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:02.313 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:17:02.313 15:59:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:02.313 15:59:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:02.313 15:59:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.313 15:59:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.313 15:59:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:02.313 15:59:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:02.313 15:59:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:02.313 15:59:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:02.313 15:59:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:02.313 15:59:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.313 15:59:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:02.313 15:59:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.314 15:59:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:02.314 Found net devices under 0000:86:00.0: cvl_0_0 00:17:02.314 15:59:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.314 15:59:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:02.314 15:59:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.314 15:59:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:02.314 15:59:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.314 15:59:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:02.314 Found net devices under 0000:86:00.1: cvl_0_1 00:17:02.314 15:59:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.314 15:59:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:02.314 15:59:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:02.314 15:59:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:02.314 15:59:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:02.314 15:59:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:02.314 15:59:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.314 15:59:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.314 15:59:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:02.314 15:59:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:02.314 15:59:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:02.314 15:59:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:02.314 15:59:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:02.314 15:59:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:02.314 15:59:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.314 15:59:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:02.314 15:59:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:02.314 15:59:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:02.314 15:59:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:02.314 15:59:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:02.314 15:59:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:02.314 15:59:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:02.314 15:59:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:02.314 15:59:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:02.314 
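The nvmf_tcp_init sequence above gives the target side its own network namespace: cvl_0_0 (one port of the E810 found at 0000:86:00.0) is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1/24, so initiator and target talk over a real NIC port pair instead of loopback. Condensed from the trace (interface names are specific to this machine; the firewall rule and ping checks follow in the log):

    # Target-in-namespace layout used by this run
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open NVMe/TCP port 4420 and sanity-check the path in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
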
15:59:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:02.314 15:59:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:02.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:17:02.314 00:17:02.314 --- 10.0.0.2 ping statistics --- 00:17:02.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.314 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:17:02.314 15:59:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:02.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:02.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.395 ms 00:17:02.314 00:17:02.314 --- 10.0.0.1 ping statistics --- 00:17:02.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.314 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:17:02.314 15:59:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.314 15:59:41 -- nvmf/common.sh@411 -- # return 0 00:17:02.314 15:59:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:02.314 15:59:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.314 15:59:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:02.314 15:59:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:02.314 15:59:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.314 15:59:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:02.314 15:59:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:02.314 15:59:41 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:02.314 15:59:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:02.314 15:59:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:02.314 15:59:41 -- common/autotest_common.sh@10 -- # set +x 00:17:02.314 15:59:41 -- nvmf/common.sh@470 -- # nvmfpid=2437876 00:17:02.314 15:59:41 -- nvmf/common.sh@471 -- # waitforlisten 2437876 00:17:02.314 15:59:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:02.314 15:59:41 -- common/autotest_common.sh@817 -- # '[' -z 2437876 ']' 00:17:02.314 15:59:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.314 15:59:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:02.314 15:59:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.314 15:59:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:02.314 15:59:41 -- common/autotest_common.sh@10 -- # set +x 00:17:02.314 [2024-04-26 15:59:41.560349] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:17:02.314 [2024-04-26 15:59:41.560440] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.314 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.314 [2024-04-26 15:59:41.668652] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.314 [2024-04-26 15:59:41.887087] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:02.314 [2024-04-26 15:59:41.887135] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.314 [2024-04-26 15:59:41.887146] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.314 [2024-04-26 15:59:41.887156] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.314 [2024-04-26 15:59:41.887166] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.314 [2024-04-26 15:59:41.887198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.883 15:59:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:02.883 15:59:42 -- common/autotest_common.sh@850 -- # return 0 00:17:02.883 15:59:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:02.883 15:59:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:02.883 15:59:42 -- common/autotest_common.sh@10 -- # set +x 00:17:02.883 15:59:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.883 15:59:42 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:02.883 15:59:42 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:02.883 15:59:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.883 15:59:42 -- common/autotest_common.sh@10 -- # set +x 00:17:02.883 [2024-04-26 15:59:42.361747] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.883 15:59:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.883 15:59:42 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:02.883 15:59:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.883 15:59:42 -- common/autotest_common.sh@10 -- # set +x 00:17:02.883 15:59:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.883 15:59:42 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:02.883 15:59:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.883 15:59:42 -- common/autotest_common.sh@10 -- # set +x 00:17:02.883 [2024-04-26 15:59:42.377913] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.883 15:59:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.883 15:59:42 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:02.883 15:59:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.883 15:59:42 -- common/autotest_common.sh@10 -- # set +x 00:17:02.883 15:59:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.883 15:59:42 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:02.883 15:59:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.883 15:59:42 -- common/autotest_common.sh@10 -- # set +x 00:17:02.883 malloc0 00:17:02.883 15:59:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.883 15:59:42 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:02.883 15:59:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.883 15:59:42 -- common/autotest_common.sh@10 -- # set +x 00:17:02.883 15:59:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.883 15:59:42 -- target/zcopy.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:02.883 15:59:42 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:02.883 15:59:42 -- nvmf/common.sh@521 -- # config=() 00:17:02.883 15:59:42 -- nvmf/common.sh@521 -- # local subsystem config 00:17:02.883 15:59:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:02.883 15:59:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:02.883 { 00:17:02.883 "params": { 00:17:02.883 "name": "Nvme$subsystem", 00:17:02.883 "trtype": "$TEST_TRANSPORT", 00:17:02.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:02.883 "adrfam": "ipv4", 00:17:02.883 "trsvcid": "$NVMF_PORT", 00:17:02.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:02.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:02.883 "hdgst": ${hdgst:-false}, 00:17:02.883 "ddgst": ${ddgst:-false} 00:17:02.883 }, 00:17:02.883 "method": "bdev_nvme_attach_controller" 00:17:02.883 } 00:17:02.883 EOF 00:17:02.883 )") 00:17:02.883 15:59:42 -- nvmf/common.sh@543 -- # cat 00:17:02.883 15:59:42 -- nvmf/common.sh@545 -- # jq . 00:17:02.883 15:59:42 -- nvmf/common.sh@546 -- # IFS=, 00:17:02.883 15:59:42 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:02.883 "params": { 00:17:02.883 "name": "Nvme1", 00:17:02.883 "trtype": "tcp", 00:17:02.883 "traddr": "10.0.0.2", 00:17:02.883 "adrfam": "ipv4", 00:17:02.883 "trsvcid": "4420", 00:17:02.883 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.883 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:02.883 "hdgst": false, 00:17:02.883 "ddgst": false 00:17:02.883 }, 00:17:02.883 "method": "bdev_nvme_attach_controller" 00:17:02.883 }' 00:17:02.883 [2024-04-26 15:59:42.521019] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:17:02.883 [2024-04-26 15:59:42.521107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2438119 ] 00:17:03.143 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.143 [2024-04-26 15:59:42.624511] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.402 [2024-04-26 15:59:42.852552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.661 Running I/O for 10 seconds... 
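Buried in the xtrace above is the subsystem bring-up the zcopy test performs before its first workload; pulled out of the trace for readability (same commands as in the log, with the bdevperf path shortened; rpc_cmd is the harness wrapper that issues these as JSON-RPC calls to the target running inside the namespace):

    # TCP transport with in-capsule data disabled (-c 0) and zero-copy enabled
    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
    # subsystem capped at 10 namespaces, with data and discovery listeners on 10.0.0.2:4420
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MiB malloc bdev exposed as namespace 1
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # first workload: 10 s verify, queue depth 128, 8 KiB I/O; the JSON config is
    # generated by gen_nvmf_target_json and handed to bdevperf via /dev/fd/62
    bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192

The Nvme1n1 summary line that follows (7063.69 IOPS, about 55 MiB/s at 8 KiB) is the result of this verify pass.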
00:17:15.894 00:17:15.894 Latency(us) 00:17:15.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.894 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:17:15.894 Verification LBA range: start 0x0 length 0x1000 00:17:15.894 Nvme1n1 : 10.01 7063.69 55.19 0.00 0.00 18070.36 1175.37 42398.94 00:17:15.894 =================================================================================================================== 00:17:15.894 Total : 7063.69 55.19 0.00 0.00 18070.36 1175.37 42398.94 00:17:15.894 15:59:54 -- target/zcopy.sh@39 -- # perfpid=2440020 00:17:15.894 15:59:54 -- target/zcopy.sh@41 -- # xtrace_disable 00:17:15.894 15:59:54 -- common/autotest_common.sh@10 -- # set +x 00:17:15.894 15:59:54 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:17:15.894 15:59:54 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:17:15.894 15:59:54 -- nvmf/common.sh@521 -- # config=() 00:17:15.894 15:59:54 -- nvmf/common.sh@521 -- # local subsystem config 00:17:15.894 15:59:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:15.894 15:59:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:15.894 { 00:17:15.894 "params": { 00:17:15.894 "name": "Nvme$subsystem", 00:17:15.894 "trtype": "$TEST_TRANSPORT", 00:17:15.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:15.894 "adrfam": "ipv4", 00:17:15.894 "trsvcid": "$NVMF_PORT", 00:17:15.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:15.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:15.894 "hdgst": ${hdgst:-false}, 00:17:15.894 "ddgst": ${ddgst:-false} 00:17:15.895 }, 00:17:15.895 "method": "bdev_nvme_attach_controller" 00:17:15.895 } 00:17:15.895 EOF 00:17:15.895 )") 00:17:15.895 15:59:54 -- nvmf/common.sh@543 -- # cat 00:17:15.895 [2024-04-26 15:59:54.377845] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.377888] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 15:59:54 -- nvmf/common.sh@545 -- # jq . 
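After the 10-second verify pass (its summary table is just above), the test launches a second bdevperf run, 5 seconds of randrw at a 50/50 mix (perfpid 2440020, again fed its config over an anonymous fd), and alongside it repeatedly re-issues the namespace-attach RPC. NSID 1 is already attached, so every attempt fails with the paired "Requested NSID 1 already in use" / "Unable to add namespace" messages that fill the remainder of this log; the failures are expected, and each attempt forces the subsystem through the pause/resume path (hence nvmf_rpc_ns_paused in the error) while zero-copy I/O is outstanding. A hedged sketch of that loop; only the RPC itself is taken from the log, the iteration count and error handling are assumptions:

    # Expected-failure loop behind the repeated errors below (loop bounds are a
    # guess). NSID 1 already exists, so each call fails, but every attempt pauses
    # and resumes the subsystem underneath the running randrw workload.
    for _ in $(seq 1 50); do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done
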
00:17:15.895 15:59:54 -- nvmf/common.sh@546 -- # IFS=, 00:17:15.895 15:59:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:15.895 "params": { 00:17:15.895 "name": "Nvme1", 00:17:15.895 "trtype": "tcp", 00:17:15.895 "traddr": "10.0.0.2", 00:17:15.895 "adrfam": "ipv4", 00:17:15.895 "trsvcid": "4420", 00:17:15.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:15.895 "hdgst": false, 00:17:15.895 "ddgst": false 00:17:15.895 }, 00:17:15.895 "method": "bdev_nvme_attach_controller" 00:17:15.895 }' 00:17:15.895 [2024-04-26 15:59:54.385839] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.385866] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.393838] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.393862] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.401872] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.401893] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.409888] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.409910] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.417895] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.417914] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.425933] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.425952] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.433949] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.433967] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.441963] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.441982] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.443995] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:17:15.895 [2024-04-26 15:59:54.444096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2440020 ] 00:17:15.895 [2024-04-26 15:59:54.449992] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.450011] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.458007] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.458025] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.466051] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.466078] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.474056] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.474082] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.482076] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.482096] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.490107] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.490127] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.498128] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.498147] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.895 [2024-04-26 15:59:54.506139] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.506159] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.514174] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.514193] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.522176] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.522195] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.530213] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.530232] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.538232] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.538251] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.546242] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.546263] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.548211] app.c: 
828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.895 [2024-04-26 15:59:54.554280] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.554300] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.562309] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.562329] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.570309] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.570329] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.578342] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.578362] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.586354] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.586374] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.594382] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.594402] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.602409] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.602428] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.895 [2024-04-26 15:59:54.610420] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.895 [2024-04-26 15:59:54.610439] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.618448] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.618466] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.626468] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.626487] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.634490] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.634509] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.642515] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.642534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.650524] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.650543] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.658568] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.658586] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.666578] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.666597] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.674595] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.674614] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.682630] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.682649] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.690647] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.690666] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.698656] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.698675] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.706698] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.706716] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.714705] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.714723] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.722753] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.722773] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.730755] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.730775] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.738772] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.738790] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.746802] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.746822] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.754837] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.754856] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.762836] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.762855] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.770866] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.770885] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.777116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.896 [2024-04-26 15:59:54.778881] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:17:15.896 [2024-04-26 15:59:54.778900] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.786916] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.786935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.794941] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.794963] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.802947] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.802966] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.810982] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.811000] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.819013] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.819032] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.827030] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.827048] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.835051] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.835076] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.843059] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.843087] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.851108] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.851127] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.859122] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.859142] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.867136] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.867155] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.875172] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.875196] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.883195] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.883215] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.891205] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.891225] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.899239] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.899259] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.907249] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.896 [2024-04-26 15:59:54.907267] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.896 [2024-04-26 15:59:54.915283] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:54.915313] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:54.923310] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:54.923329] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:54.931330] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:54.931348] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:54.939361] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:54.939380] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:54.947384] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:54.947402] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:54.955382] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:54.955401] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:54.963424] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:54.963443] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:54.971437] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:54.971456] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:54.979468] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:54.979486] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:54.987491] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:54.987510] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:54.995505] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:54.995528] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.003540] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.003560] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.011557] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.011576] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.019569] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.019588] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.027605] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.027624] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.035622] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.035641] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.043661] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.043680] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.051676] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.051696] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.059681] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.059699] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.067714] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.067734] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.075738] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.075756] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.083751] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.083770] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.091781] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.091800] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.099790] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.099809] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.107821] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.107840] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.115844] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.115864] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.123874] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.123893] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.131888] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.131906] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.139916] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.139934] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.147923] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.147945] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.155956] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.155975] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.163972] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.163991] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.172077] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.172100] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.180055] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.180081] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.188066] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.188093] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.196120] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.196141] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.204122] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.204141] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.212132] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.897 [2024-04-26 15:59:55.212151] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.897 [2024-04-26 15:59:55.220177] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.220196] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.228175] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.228194] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.236231] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.236251] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.244233] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.244253] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.252247] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.252268] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.260283] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.260303] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.268310] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.268330] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.276322] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.276346] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.284353] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.284373] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 Running I/O for 5 seconds... 00:17:15.898 [2024-04-26 15:59:55.296116] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.296141] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.316505] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.316531] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.326460] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.326484] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.335383] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.335406] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.344863] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.344887] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.354286] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.354311] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.363378] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.363402] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.372449] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.372472] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.381590] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.381614] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.390905] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.390934] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.400264] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.400288] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.409728] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.409753] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.419050] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.419080] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.428409] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.428433] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.437718] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.437742] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.446826] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.446851] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.455987] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.456011] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.465281] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.465305] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.474492] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.474516] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.483413] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.483437] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.492542] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.492566] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.501763] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.501788] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.510988] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.511013] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.898 [2024-04-26 15:59:55.520192] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:15.898 [2024-04-26 15:59:55.520216] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:15.898 [2024-04-26 15:59:55.529117] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:15.898 [2024-04-26 15:59:55.529141] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:1906 "Requested NSID 1 already in use" / nvmf_rpc.c:1534 "Unable to add namespace" error pair repeats for every add-namespace attempt between the entries above and below (elapsed 00:17:15.898 through 00:17:18.761) ...]
00:17:18.761 [2024-04-26 15:59:58.421150] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:18.761 [2024-04-26 15:59:58.421174] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:18.761 [2024-04-26 15:59:58.429509] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:18.761 [2024-04-26 15:59:58.429532] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:18.761 [2024-04-26 15:59:58.440991] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:18.761 [2024-04-26 15:59:58.441016] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.021 [2024-04-26 15:59:58.449264] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.021 [2024-04-26 15:59:58.449288] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.021 [2024-04-26 15:59:58.458956] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.021 [2024-04-26 15:59:58.458981] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.021 [2024-04-26 15:59:58.467876] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.021 [2024-04-26 15:59:58.467901] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.021 [2024-04-26 15:59:58.477134] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.021 [2024-04-26 15:59:58.477159] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.021 [2024-04-26 15:59:58.486372] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.021 [2024-04-26 15:59:58.486397] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.021 [2024-04-26 15:59:58.495631] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.021 [2024-04-26 15:59:58.495656] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.021 [2024-04-26 15:59:58.504397] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.021 [2024-04-26 15:59:58.504421] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.021 [2024-04-26 15:59:58.514179] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.021 [2024-04-26 15:59:58.514205] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.021 [2024-04-26 15:59:58.522928] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.021 [2024-04-26 15:59:58.522952] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.021 [2024-04-26 15:59:58.538703] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.021 [2024-04-26 15:59:58.538728] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.021 [2024-04-26 15:59:58.547727] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.021 [2024-04-26 15:59:58.547752] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.021 [2024-04-26 15:59:58.558968] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.021 [2024-04-26 15:59:58.558993] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.021 [2024-04-26 15:59:58.567380] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.021 [2024-04-26 15:59:58.567405] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.021 [2024-04-26 15:59:58.577426] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.021 [2024-04-26 15:59:58.577451] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.021 [2024-04-26 15:59:58.586624] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.021 [2024-04-26 15:59:58.586648] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.021 [2024-04-26 15:59:58.595988] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.021 [2024-04-26 15:59:58.596013] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.021 [2024-04-26 15:59:58.605223] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.021 [2024-04-26 15:59:58.605247] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.021 [2024-04-26 15:59:58.614525] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.021 [2024-04-26 15:59:58.614548] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.021 [2024-04-26 15:59:58.623390] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.021 [2024-04-26 15:59:58.623414] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.021 [2024-04-26 15:59:58.632719] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.022 [2024-04-26 15:59:58.632742] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.022 [2024-04-26 15:59:58.642065] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.022 [2024-04-26 15:59:58.642114] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.022 [2024-04-26 15:59:58.651295] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.022 [2024-04-26 15:59:58.651319] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.022 [2024-04-26 15:59:58.660471] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.022 [2024-04-26 15:59:58.660494] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.022 [2024-04-26 15:59:58.669701] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.022 [2024-04-26 15:59:58.669725] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.022 [2024-04-26 15:59:58.678858] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.022 [2024-04-26 15:59:58.678882] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.022 [2024-04-26 15:59:58.687883] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.022 [2024-04-26 15:59:58.687906] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.022 [2024-04-26 15:59:58.697097] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.022 [2024-04-26 15:59:58.697121] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.706247] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.706271] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.715431] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.715455] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.724720] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.724748] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.734164] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.734188] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.743586] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.743610] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.753023] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.753047] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.762317] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.762342] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.771671] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.771696] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.780649] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.780673] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.789660] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.789685] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.798591] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.798615] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.807773] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.807797] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.816811] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.816836] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.825933] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.825957] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.834452] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.834477] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.843953] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.843977] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.853256] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.853280] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.862283] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.862307] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.871471] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.871495] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.881434] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.881458] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.892018] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.892042] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.903942] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.903969] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.914436] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.914461] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.923801] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.923825] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.932238] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.932262] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.941963] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.941988] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.951415] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.951439] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.282 [2024-04-26 15:59:58.960348] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.282 [2024-04-26 15:59:58.960372] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:58.969841] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:58.969865] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:58.979262] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:58.979286] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:58.988095] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:58.988120] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:58.997989] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:58.998013] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:59.009154] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:59.009179] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:59.019077] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:59.019101] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:59.026864] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:59.026888] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:59.038318] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:59.038343] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:59.049639] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:59.049663] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:59.058175] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:59.058198] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:59.067076] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:59.067102] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:59.076262] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:59.076285] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:59.085467] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:59.085495] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:59.094625] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:59.094650] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:59.103726] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:59.103750] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:59.112984] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:59.113008] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:59.122210] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:59.122234] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:59.131370] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:59.131393] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:59.140758] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:59.140782] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:59.150027] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:59.150051] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:59.159383] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:59.159407] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:59.168899] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:59.168923] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:59.178132] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:59.178156] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.542 [2024-04-26 15:59:59.187457] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.542 [2024-04-26 15:59:59.187481] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.543 [2024-04-26 15:59:59.196594] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.543 [2024-04-26 15:59:59.196619] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.543 [2024-04-26 15:59:59.205578] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.543 [2024-04-26 15:59:59.205602] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.543 [2024-04-26 15:59:59.214654] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.543 [2024-04-26 15:59:59.214678] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.543 [2024-04-26 15:59:59.224007] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.543 [2024-04-26 15:59:59.224031] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.233120] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.233144] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.242500] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.242525] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.251648] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.251672] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.260793] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.260821] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.270008] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.270033] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.279470] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.279494] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.288728] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.288752] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.298172] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.298197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.307339] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.307363] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.316405] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.316429] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.325513] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.325538] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.334575] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.334600] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.343748] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.343772] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.353552] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.353575] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.365679] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.365702] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.374312] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.374335] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.383928] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.383953] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.393232] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.393256] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.402471] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.402495] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.411892] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.411917] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.421153] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.421177] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.430408] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.430432] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.439648] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.439676] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.448930] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.448954] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.458122] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.458145] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.467175] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.467199] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:19.803 [2024-04-26 15:59:59.476248] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:19.803 [2024-04-26 15:59:59.476272] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.063 [2024-04-26 15:59:59.485529] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.063 [2024-04-26 15:59:59.485554] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.063 [2024-04-26 15:59:59.494880] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.063 [2024-04-26 15:59:59.494905] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.063 [2024-04-26 15:59:59.503944] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.063 [2024-04-26 15:59:59.503969] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.063 [2024-04-26 15:59:59.513253] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.063 [2024-04-26 15:59:59.513276] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.063 [2024-04-26 15:59:59.522484] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.063 [2024-04-26 15:59:59.522508] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.063 [2024-04-26 15:59:59.531581] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.063 [2024-04-26 15:59:59.531605] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.063 [2024-04-26 15:59:59.540553] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.063 [2024-04-26 15:59:59.540577] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.063 [2024-04-26 15:59:59.549636] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.063 [2024-04-26 15:59:59.549660] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.063 [2024-04-26 15:59:59.558650] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.063 [2024-04-26 15:59:59.558674] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.063 [2024-04-26 15:59:59.567798] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.063 [2024-04-26 15:59:59.567822] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.063 [2024-04-26 15:59:59.577103] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.064 [2024-04-26 15:59:59.577128] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.064 [2024-04-26 15:59:59.586190] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.064 [2024-04-26 15:59:59.586214] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.064 [2024-04-26 15:59:59.595128] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.064 [2024-04-26 15:59:59.595153] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.064 [2024-04-26 15:59:59.604054] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.064 [2024-04-26 15:59:59.604084] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.064 [2024-04-26 15:59:59.613089] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.064 [2024-04-26 15:59:59.613129] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.064 [2024-04-26 15:59:59.622585] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.064 [2024-04-26 15:59:59.622609] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.064 [2024-04-26 15:59:59.631670] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.064 [2024-04-26 15:59:59.631695] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.064 [2024-04-26 15:59:59.640885] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.064 [2024-04-26 15:59:59.640909] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.064 [2024-04-26 15:59:59.650191] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.064 [2024-04-26 15:59:59.650215] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.064 [2024-04-26 15:59:59.659303] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.064 [2024-04-26 15:59:59.659327] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.064 [2024-04-26 15:59:59.668151] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.064 [2024-04-26 15:59:59.668174] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.064 [2024-04-26 15:59:59.677294] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.064 [2024-04-26 15:59:59.677318] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.064 [2024-04-26 15:59:59.686493] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.064 [2024-04-26 15:59:59.686517] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.064 [2024-04-26 15:59:59.695783] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.064 [2024-04-26 15:59:59.695806] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.064 [2024-04-26 15:59:59.704884] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.064 [2024-04-26 15:59:59.704907] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.064 [2024-04-26 15:59:59.714138] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.064 [2024-04-26 15:59:59.714170] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.064 [2024-04-26 15:59:59.723228] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.064 [2024-04-26 15:59:59.723252] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.064 [2024-04-26 15:59:59.732557] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.064 [2024-04-26 15:59:59.732581] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.064 [2024-04-26 15:59:59.741856] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.064 [2024-04-26 15:59:59.741880] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.751450] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.751474] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.761186] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.761210] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.770199] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.770223] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.779281] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.779305] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.788833] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.788856] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.798199] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.798223] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.807868] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.807892] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.817051] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.817081] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.825925] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.825949] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.835381] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.835404] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.844366] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.844390] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.854015] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.854039] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.863219] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.863244] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.874082] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.874106] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.882255] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.882279] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.893341] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.893367] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.902254] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.902278] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.911385] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.911409] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.920398] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.920422] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.930787] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.930811] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.938923] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.938946] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.948807] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.948832] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.958066] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.958096] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.966922] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.966946] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.975989] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.976013] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.985099] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.985123] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 15:59:59.994357] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 15:59:59.994381] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.325 [2024-04-26 16:00:00.005087] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.325 [2024-04-26 16:00:00.005239] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.017348] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.017375] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.026377] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.026407] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.037214] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.037243] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.048328] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.048361] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.056561] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.056586] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.068738] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.068764] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.079810] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.079835] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.088274] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.088299] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.099803] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.099828] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.111322] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.111347] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.119747] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.119772] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.129587] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.129612] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.139146] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.139171] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.148688] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.148718] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.158098] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.158124] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.167573] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.167597] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.176853] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.176877] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.186353] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.186379] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.195724] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.195749] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.205050] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.205084] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.214474] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.214500] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.223908] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.223933] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.233409] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.233434] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.242805] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.242830] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.252444] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.252469] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.586 [2024-04-26 16:00:00.261339] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.586 [2024-04-26 16:00:00.261365] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.846 [2024-04-26 16:00:00.271108] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.846 [2024-04-26 16:00:00.271133] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.846 [2024-04-26 16:00:00.282275] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.846 [2024-04-26 16:00:00.282301] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.846 [2024-04-26 16:00:00.290809] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.846 [2024-04-26 16:00:00.290834] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.846 [2024-04-26 16:00:00.300733] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.846 [2024-04-26 16:00:00.300760] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.846 [2024-04-26 16:00:00.309399] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.846 [2024-04-26 16:00:00.309424] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:20.846
00:17:20.846 Latency(us)
00:17:20.846 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:20.846 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:17:20.846 Nvme1n1 : 5.01 13538.63 105.77 0.00 0.00 9443.65 2949.12 25074.64
00:17:20.846 ===================================================================================================================
00:17:20.846 Total : 13538.63 105.77 0.00 0.00 9443.65 2949.12 25074.64
00:17:20.846 [2024-04-26 16:00:00.315243] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:20.846 [2024-04-26 16:00:00.315264] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:20.846 [2024-04-26 16:00:00.323269]
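The table above is the bdevperf summary for Nvme1n1, while the surrounding subsystem.c/nvmf_rpc.c messages are what the SPDK target prints whenever an nvmf_subsystem_add_ns RPC requests an NSID that is still attached. A minimal sketch of the kind of loop that produces this repeating message pair is shown below; the RPC socket path, subsystem NQN, and bdev name are illustrative assumptions and are not taken from this log.

#!/usr/bin/env bash
# Sketch only: repeatedly re-add NSID 1 while it is still attached, so the
# target logs "Requested NSID 1 already in use" followed by
# "Unable to add namespace" on every attempt.
# All paths and names below are assumed, not read from this run.
set -x
RPC="./scripts/rpc.py"                # SPDK RPC client (assumed checkout layout)
SOCK="/var/tmp/spdk.sock"             # assumed RPC socket
NQN="nqn.2016-06.io.spdk:cnode1"      # assumed subsystem NQN
BDEV="Malloc0"                        # assumed bdev already exposed as NSID 1
for _ in $(seq 1 5); do
    # Each call is expected to fail because NSID 1 is already in use;
    # "|| true" keeps the loop going so the error pair repeats.
    "$RPC" -s "$SOCK" nvmf_subsystem_add_ns -n 1 "$NQN" "$BDEV" || true
done

Each failed add produces exactly one subsystem.c error plus one nvmf_rpc.c error, which matches the cadence of the entries in this log while bdevperf I/O continues on the already-attached namespace.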
00:17:21.889 [2024-04-26 16:00:01.350101] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:21.889 [2024-04-26 16:00:01.350122] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:21.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2440020) - No such process
00:17:21.889 16:00:01 -- target/zcopy.sh@49 -- # wait 2440020
00:17:21.889 16:00:01 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:21.889 16:00:01 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:21.889 16:00:01 -- common/autotest_common.sh@10 --
# set +x 00:17:21.889 16:00:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.889 16:00:01 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:21.889 16:00:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.889 16:00:01 -- common/autotest_common.sh@10 -- # set +x 00:17:21.889 delay0 00:17:21.889 16:00:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.889 16:00:01 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:21.889 16:00:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.889 16:00:01 -- common/autotest_common.sh@10 -- # set +x 00:17:21.889 16:00:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.889 16:00:01 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:21.889 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.889 [2024-04-26 16:00:01.505403] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:17:28.447 Initializing NVMe Controllers 00:17:28.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:28.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:28.447 Initialization complete. Launching workers. 00:17:28.447 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 84 00:17:28.447 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 369, failed to submit 35 00:17:28.447 success 151, unsuccess 218, failed 0 00:17:28.447 16:00:07 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:17:28.447 16:00:07 -- target/zcopy.sh@60 -- # nvmftestfini 00:17:28.447 16:00:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:28.447 16:00:07 -- nvmf/common.sh@117 -- # sync 00:17:28.447 16:00:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:28.447 16:00:07 -- nvmf/common.sh@120 -- # set +e 00:17:28.447 16:00:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:28.447 16:00:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:28.447 rmmod nvme_tcp 00:17:28.447 rmmod nvme_fabrics 00:17:28.447 rmmod nvme_keyring 00:17:28.447 16:00:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:28.447 16:00:07 -- nvmf/common.sh@124 -- # set -e 00:17:28.447 16:00:07 -- nvmf/common.sh@125 -- # return 0 00:17:28.447 16:00:07 -- nvmf/common.sh@478 -- # '[' -n 2437876 ']' 00:17:28.447 16:00:07 -- nvmf/common.sh@479 -- # killprocess 2437876 00:17:28.447 16:00:07 -- common/autotest_common.sh@936 -- # '[' -z 2437876 ']' 00:17:28.447 16:00:07 -- common/autotest_common.sh@940 -- # kill -0 2437876 00:17:28.447 16:00:07 -- common/autotest_common.sh@941 -- # uname 00:17:28.447 16:00:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:28.447 16:00:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2437876 00:17:28.447 16:00:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:28.447 16:00:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:28.447 16:00:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2437876' 00:17:28.447 killing process with pid 2437876 00:17:28.447 16:00:07 -- common/autotest_common.sh@955 -- # kill 2437876 
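[editor's note] The abort scenario driven above by zcopy.sh can be reproduced by hand against an already-running target. This is only a sketch, assuming the default RPC socket (/var/tmp/spdk.sock), paths relative to the SPDK checkout, and the same bdev names and addresses used in this run; all flags are copied from the rpc_cmd and abort invocations logged above.

  # back namespace 1 with a delay bdev so I/O stays outstanding long enough to be aborted
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # queue I/O against the namespace and abort it from the example client
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'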
00:17:28.447 16:00:07 -- common/autotest_common.sh@960 -- # wait 2437876 00:17:29.466 16:00:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:29.466 16:00:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:29.466 16:00:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:29.466 16:00:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:29.466 16:00:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:29.466 16:00:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.466 16:00:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:29.466 16:00:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.008 16:00:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:32.008 00:17:32.008 real 0m34.810s 00:17:32.008 user 0m48.737s 00:17:32.008 sys 0m10.026s 00:17:32.008 16:00:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:32.008 16:00:11 -- common/autotest_common.sh@10 -- # set +x 00:17:32.008 ************************************ 00:17:32.008 END TEST nvmf_zcopy 00:17:32.008 ************************************ 00:17:32.008 16:00:11 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:32.008 16:00:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:32.008 16:00:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:32.008 16:00:11 -- common/autotest_common.sh@10 -- # set +x 00:17:32.008 ************************************ 00:17:32.008 START TEST nvmf_nmic 00:17:32.008 ************************************ 00:17:32.008 16:00:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:32.008 * Looking for test storage... 
00:17:32.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:32.008 16:00:11 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:32.008 16:00:11 -- nvmf/common.sh@7 -- # uname -s 00:17:32.008 16:00:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.008 16:00:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.008 16:00:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.008 16:00:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.008 16:00:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.008 16:00:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.008 16:00:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.008 16:00:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.008 16:00:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.008 16:00:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.008 16:00:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:32.008 16:00:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:32.008 16:00:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.008 16:00:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.008 16:00:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:32.008 16:00:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.008 16:00:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:32.008 16:00:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.008 16:00:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.008 16:00:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.008 16:00:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.008 16:00:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.008 16:00:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.008 16:00:11 -- paths/export.sh@5 -- # export PATH 00:17:32.008 16:00:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.008 16:00:11 -- nvmf/common.sh@47 -- # : 0 00:17:32.008 16:00:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:32.008 16:00:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:32.008 16:00:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.008 16:00:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.008 16:00:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.008 16:00:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:32.008 16:00:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:32.008 16:00:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:32.008 16:00:11 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:32.008 16:00:11 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:32.008 16:00:11 -- target/nmic.sh@14 -- # nvmftestinit 00:17:32.008 16:00:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:32.008 16:00:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:32.008 16:00:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:32.008 16:00:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:32.008 16:00:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:32.008 16:00:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.008 16:00:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.008 16:00:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.008 16:00:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:32.008 16:00:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:32.008 16:00:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:32.008 16:00:11 -- common/autotest_common.sh@10 -- # set +x 00:17:37.275 16:00:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:37.275 16:00:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:37.275 16:00:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:37.275 16:00:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:37.275 16:00:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:37.275 16:00:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:37.275 16:00:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:37.275 16:00:16 -- nvmf/common.sh@295 -- # net_devs=() 00:17:37.275 16:00:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:37.275 16:00:16 -- nvmf/common.sh@296 -- # 
e810=() 00:17:37.275 16:00:16 -- nvmf/common.sh@296 -- # local -ga e810 00:17:37.275 16:00:16 -- nvmf/common.sh@297 -- # x722=() 00:17:37.275 16:00:16 -- nvmf/common.sh@297 -- # local -ga x722 00:17:37.275 16:00:16 -- nvmf/common.sh@298 -- # mlx=() 00:17:37.275 16:00:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:37.275 16:00:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:37.275 16:00:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:37.275 16:00:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:37.275 16:00:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:37.275 16:00:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:37.275 16:00:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:37.275 16:00:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:37.275 16:00:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:37.275 16:00:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:37.275 16:00:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:37.275 16:00:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:37.275 16:00:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:37.275 16:00:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:37.275 16:00:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:37.275 16:00:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:37.275 16:00:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:37.275 16:00:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:37.275 16:00:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:37.275 16:00:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:37.275 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:37.275 16:00:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:37.275 16:00:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:37.275 16:00:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.275 16:00:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.275 16:00:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:37.275 16:00:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:37.275 16:00:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:37.275 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:37.275 16:00:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:37.275 16:00:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:37.275 16:00:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.275 16:00:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.275 16:00:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:37.275 16:00:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:37.275 16:00:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:37.275 16:00:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:37.275 16:00:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:37.275 16:00:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.275 16:00:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:37.275 16:00:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.275 16:00:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:37.275 Found net 
devices under 0000:86:00.0: cvl_0_0 00:17:37.275 16:00:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.275 16:00:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:37.275 16:00:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.275 16:00:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:37.275 16:00:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.275 16:00:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:37.275 Found net devices under 0000:86:00.1: cvl_0_1 00:17:37.275 16:00:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.275 16:00:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:37.275 16:00:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:37.275 16:00:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:37.275 16:00:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:37.275 16:00:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:37.275 16:00:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.275 16:00:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:37.275 16:00:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:37.275 16:00:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:37.275 16:00:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:37.275 16:00:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:37.275 16:00:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:37.275 16:00:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:37.275 16:00:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.275 16:00:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:37.275 16:00:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:37.275 16:00:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:37.275 16:00:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:37.275 16:00:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:37.275 16:00:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:37.275 16:00:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:37.275 16:00:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:37.275 16:00:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:37.275 16:00:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:37.275 16:00:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:37.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:37.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:17:37.275 00:17:37.275 --- 10.0.0.2 ping statistics --- 00:17:37.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.275 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:17:37.275 16:00:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:37.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:37.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:17:37.275 00:17:37.275 --- 10.0.0.1 ping statistics --- 00:17:37.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.275 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:17:37.275 16:00:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:37.275 16:00:16 -- nvmf/common.sh@411 -- # return 0 00:17:37.275 16:00:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:37.275 16:00:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:37.275 16:00:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:37.275 16:00:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:37.275 16:00:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:37.275 16:00:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:37.275 16:00:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:37.275 16:00:16 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:37.275 16:00:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:37.275 16:00:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:37.275 16:00:16 -- common/autotest_common.sh@10 -- # set +x 00:17:37.275 16:00:16 -- nvmf/common.sh@470 -- # nvmfpid=2446296 00:17:37.275 16:00:16 -- nvmf/common.sh@471 -- # waitforlisten 2446296 00:17:37.275 16:00:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:37.275 16:00:16 -- common/autotest_common.sh@817 -- # '[' -z 2446296 ']' 00:17:37.275 16:00:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.275 16:00:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:37.275 16:00:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.275 16:00:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:37.275 16:00:16 -- common/autotest_common.sh@10 -- # set +x 00:17:37.275 [2024-04-26 16:00:16.667722] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:17:37.275 [2024-04-26 16:00:16.667808] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.275 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.275 [2024-04-26 16:00:16.775712] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:37.534 [2024-04-26 16:00:16.999506] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.534 [2024-04-26 16:00:16.999553] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.534 [2024-04-26 16:00:16.999564] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.534 [2024-04-26 16:00:16.999575] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.534 [2024-04-26 16:00:16.999583] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
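[editor's note] Condensed recap of the network setup and target launch logged above, as a sketch only (interface names and addresses as detected in this run, binary path abbreviated to the SPDK checkout): the first E810 port is moved into a private namespace and addressed as the target side, the second port stays in the root namespace as the initiator side, and nvmf_tgt is started inside the namespace.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &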
00:17:37.534 [2024-04-26 16:00:16.999657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.534 [2024-04-26 16:00:16.999733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.534 [2024-04-26 16:00:16.999819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.534 [2024-04-26 16:00:16.999828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:37.793 16:00:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:37.793 16:00:17 -- common/autotest_common.sh@850 -- # return 0 00:17:37.793 16:00:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:37.793 16:00:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:37.793 16:00:17 -- common/autotest_common.sh@10 -- # set +x 00:17:38.053 16:00:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.053 16:00:17 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:38.053 16:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.053 16:00:17 -- common/autotest_common.sh@10 -- # set +x 00:17:38.053 [2024-04-26 16:00:17.485530] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.053 16:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.053 16:00:17 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:38.053 16:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.053 16:00:17 -- common/autotest_common.sh@10 -- # set +x 00:17:38.053 Malloc0 00:17:38.053 16:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.053 16:00:17 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:38.053 16:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.053 16:00:17 -- common/autotest_common.sh@10 -- # set +x 00:17:38.053 16:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.053 16:00:17 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:38.053 16:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.053 16:00:17 -- common/autotest_common.sh@10 -- # set +x 00:17:38.053 16:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.053 16:00:17 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:38.053 16:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.053 16:00:17 -- common/autotest_common.sh@10 -- # set +x 00:17:38.053 [2024-04-26 16:00:17.606551] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.053 16:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.053 16:00:17 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:38.053 test case1: single bdev can't be used in multiple subsystems 00:17:38.053 16:00:17 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:38.053 16:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.053 16:00:17 -- common/autotest_common.sh@10 -- # set +x 00:17:38.053 16:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.053 16:00:17 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:38.053 16:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 
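[editor's note] The rpc_cmd calls above are issued by the test harness over the target's RPC socket. A minimal standalone equivalent is sketched below, assuming scripts/rpc.py and the default /var/tmp/spdk.sock; the commands and flags are the ones logged above, and the last add_ns is the one test case1 expects to fail because Malloc0 is already claimed by cnode1 (the JSON-RPC error that follows).

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # test case1: a bdev can be claimed by only one subsystem
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail: Malloc0 already claimed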
00:17:38.053 16:00:17 -- common/autotest_common.sh@10 -- # set +x 00:17:38.053 16:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.053 16:00:17 -- target/nmic.sh@28 -- # nmic_status=0 00:17:38.053 16:00:17 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:38.053 16:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.053 16:00:17 -- common/autotest_common.sh@10 -- # set +x 00:17:38.053 [2024-04-26 16:00:17.630383] bdev.c:8005:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:38.053 [2024-04-26 16:00:17.630412] subsystem.c:1940:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:38.053 [2024-04-26 16:00:17.630426] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:38.053 request: 00:17:38.053 { 00:17:38.053 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:38.053 "namespace": { 00:17:38.053 "bdev_name": "Malloc0", 00:17:38.053 "no_auto_visible": false 00:17:38.053 }, 00:17:38.053 "method": "nvmf_subsystem_add_ns", 00:17:38.053 "req_id": 1 00:17:38.053 } 00:17:38.053 Got JSON-RPC error response 00:17:38.053 response: 00:17:38.053 { 00:17:38.053 "code": -32602, 00:17:38.053 "message": "Invalid parameters" 00:17:38.053 } 00:17:38.053 16:00:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:38.053 16:00:17 -- target/nmic.sh@29 -- # nmic_status=1 00:17:38.053 16:00:17 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:38.053 16:00:17 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:17:38.053 Adding namespace failed - expected result. 00:17:38.053 16:00:17 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:38.053 test case2: host connect to nvmf target in multiple paths 00:17:38.053 16:00:17 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:38.053 16:00:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.053 16:00:17 -- common/autotest_common.sh@10 -- # set +x 00:17:38.053 [2024-04-26 16:00:17.642563] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:38.053 16:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.053 16:00:17 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:39.434 16:00:18 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:40.811 16:00:20 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:40.811 16:00:20 -- common/autotest_common.sh@1184 -- # local i=0 00:17:40.811 16:00:20 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:17:40.811 16:00:20 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:17:40.811 16:00:20 -- common/autotest_common.sh@1191 -- # sleep 2 00:17:42.713 16:00:22 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:17:42.713 16:00:22 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:17:42.713 16:00:22 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:17:42.713 16:00:22 -- 
common/autotest_common.sh@1193 -- # nvme_devices=1 00:17:42.713 16:00:22 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:17:42.713 16:00:22 -- common/autotest_common.sh@1194 -- # return 0 00:17:42.713 16:00:22 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:42.713 [global] 00:17:42.713 thread=1 00:17:42.713 invalidate=1 00:17:42.713 rw=write 00:17:42.713 time_based=1 00:17:42.713 runtime=1 00:17:42.713 ioengine=libaio 00:17:42.713 direct=1 00:17:42.713 bs=4096 00:17:42.713 iodepth=1 00:17:42.713 norandommap=0 00:17:42.713 numjobs=1 00:17:42.713 00:17:42.713 verify_dump=1 00:17:42.713 verify_backlog=512 00:17:42.713 verify_state_save=0 00:17:42.713 do_verify=1 00:17:42.713 verify=crc32c-intel 00:17:42.713 [job0] 00:17:42.713 filename=/dev/nvme0n1 00:17:42.713 Could not set queue depth (nvme0n1) 00:17:42.971 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:42.971 fio-3.35 00:17:42.971 Starting 1 thread 00:17:43.908 00:17:43.908 job0: (groupid=0, jobs=1): err= 0: pid=2447372: Fri Apr 26 16:00:23 2024 00:17:43.908 read: IOPS=502, BW=2010KiB/s (2058kB/s)(2084KiB/1037msec) 00:17:43.908 slat (nsec): min=6556, max=37297, avg=12101.54, stdev=7315.70 00:17:43.908 clat (usec): min=404, max=42050, avg=1324.12, stdev=5289.84 00:17:43.908 lat (usec): min=411, max=42073, avg=1336.22, stdev=5291.17 00:17:43.908 clat percentiles (usec): 00:17:43.908 | 1.00th=[ 437], 5.00th=[ 490], 10.00th=[ 515], 20.00th=[ 545], 00:17:43.909 | 30.00th=[ 553], 40.00th=[ 562], 50.00th=[ 578], 60.00th=[ 603], 00:17:43.909 | 70.00th=[ 685], 80.00th=[ 734], 90.00th=[ 791], 95.00th=[ 881], 00:17:43.909 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:17:43.909 | 99.99th=[42206] 00:17:43.909 write: IOPS=987, BW=3950KiB/s (4045kB/s)(4096KiB/1037msec); 0 zone resets 00:17:43.909 slat (usec): min=9, max=27638, avg=37.91, stdev=863.39 00:17:43.909 clat (usec): min=190, max=760, avg=290.07, stdev=79.02 00:17:43.909 lat (usec): min=200, max=28273, avg=327.98, stdev=877.70 00:17:43.909 clat percentiles (usec): 00:17:43.909 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 219], 00:17:43.909 | 30.00th=[ 227], 40.00th=[ 251], 50.00th=[ 277], 60.00th=[ 289], 00:17:43.909 | 70.00th=[ 302], 80.00th=[ 371], 90.00th=[ 412], 95.00th=[ 424], 00:17:43.909 | 99.00th=[ 482], 99.50th=[ 537], 99.90th=[ 685], 99.95th=[ 758], 00:17:43.909 | 99.99th=[ 758] 00:17:43.909 bw ( KiB/s): min= 3880, max= 4312, per=100.00%, avg=4096.00, stdev=305.47, samples=2 00:17:43.909 iops : min= 970, max= 1078, avg=1024.00, stdev=76.37, samples=2 00:17:43.909 lat (usec) : 250=26.34%, 500=41.49%, 750=26.54%, 1000=4.79% 00:17:43.909 lat (msec) : 2=0.26%, 50=0.58% 00:17:43.909 cpu : usr=0.87%, sys=1.74%, ctx=1549, majf=0, minf=2 00:17:43.909 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:43.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.909 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.909 issued rwts: total=521,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.909 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:43.909 00:17:43.909 Run status group 0 (all jobs): 00:17:43.909 READ: bw=2010KiB/s (2058kB/s), 2010KiB/s-2010KiB/s (2058kB/s-2058kB/s), io=2084KiB (2134kB), run=1037-1037msec 00:17:43.909 WRITE: bw=3950KiB/s (4045kB/s), 
3950KiB/s-3950KiB/s (4045kB/s-4045kB/s), io=4096KiB (4194kB), run=1037-1037msec 00:17:43.909 00:17:43.909 Disk stats (read/write): 00:17:43.909 nvme0n1: ios=542/1024, merge=0/0, ticks=1488/287, in_queue=1775, util=98.70% 00:17:43.909 16:00:23 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:44.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:44.476 16:00:24 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:44.476 16:00:24 -- common/autotest_common.sh@1205 -- # local i=0 00:17:44.476 16:00:24 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:44.476 16:00:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:44.476 16:00:24 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:44.476 16:00:24 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:44.476 16:00:24 -- common/autotest_common.sh@1217 -- # return 0 00:17:44.476 16:00:24 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:44.476 16:00:24 -- target/nmic.sh@53 -- # nvmftestfini 00:17:44.476 16:00:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:44.476 16:00:24 -- nvmf/common.sh@117 -- # sync 00:17:44.476 16:00:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:44.476 16:00:24 -- nvmf/common.sh@120 -- # set +e 00:17:44.476 16:00:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:44.476 16:00:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:44.476 rmmod nvme_tcp 00:17:44.476 rmmod nvme_fabrics 00:17:44.734 rmmod nvme_keyring 00:17:44.734 16:00:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:44.734 16:00:24 -- nvmf/common.sh@124 -- # set -e 00:17:44.734 16:00:24 -- nvmf/common.sh@125 -- # return 0 00:17:44.734 16:00:24 -- nvmf/common.sh@478 -- # '[' -n 2446296 ']' 00:17:44.734 16:00:24 -- nvmf/common.sh@479 -- # killprocess 2446296 00:17:44.734 16:00:24 -- common/autotest_common.sh@936 -- # '[' -z 2446296 ']' 00:17:44.734 16:00:24 -- common/autotest_common.sh@940 -- # kill -0 2446296 00:17:44.734 16:00:24 -- common/autotest_common.sh@941 -- # uname 00:17:44.734 16:00:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:44.734 16:00:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2446296 00:17:44.734 16:00:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:44.734 16:00:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:44.734 16:00:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2446296' 00:17:44.734 killing process with pid 2446296 00:17:44.734 16:00:24 -- common/autotest_common.sh@955 -- # kill 2446296 00:17:44.734 16:00:24 -- common/autotest_common.sh@960 -- # wait 2446296 00:17:46.110 16:00:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:46.110 16:00:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:46.110 16:00:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:46.110 16:00:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:46.110 16:00:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:46.110 16:00:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.110 16:00:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.110 16:00:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.643 16:00:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:48.643 00:17:48.643 real 0m16.453s 00:17:48.643 user 0m39.768s 00:17:48.643 sys 0m4.815s 
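That closes out nvmf_nmic after roughly 16.5 seconds of wall time. Test case 1 confirmed that a bdev already claimed by cnode1 cannot be added to a second subsystem (the "already claimed: type exclusive_write" error and the -32602 "Invalid parameters" JSON-RPC response earlier in the trace), and test case 2 attached the host to the same subsystem through two listeners before running the fio write job and disconnecting both controllers in one call. A condensed sketch of the two checks, assuming the target state built above and with $HOSTNQN/$HOSTID standing in for the UUID-based values the harness derives from nvme gen-hostnqn:

  # case 1: expected failure, Malloc0 is already owned by cnode1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 && echo 'unexpected success'
  # case 2: one subsystem, two TCP listeners, so the host sees two paths
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn=$HOSTNQN --hostid=$HOSTID
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 --hostnqn=$HOSTNQN --hostid=$HOSTID
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # reports 2 controller(s) disconnected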
00:17:48.643 16:00:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:48.643 16:00:27 -- common/autotest_common.sh@10 -- # set +x 00:17:48.643 ************************************ 00:17:48.643 END TEST nvmf_nmic 00:17:48.643 ************************************ 00:17:48.643 16:00:27 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:48.643 16:00:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:48.643 16:00:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:48.643 16:00:27 -- common/autotest_common.sh@10 -- # set +x 00:17:48.643 ************************************ 00:17:48.643 START TEST nvmf_fio_target 00:17:48.643 ************************************ 00:17:48.643 16:00:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:48.643 * Looking for test storage... 00:17:48.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:48.643 16:00:28 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:48.643 16:00:28 -- nvmf/common.sh@7 -- # uname -s 00:17:48.643 16:00:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.643 16:00:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.643 16:00:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.643 16:00:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.643 16:00:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.643 16:00:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.643 16:00:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.643 16:00:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.643 16:00:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.643 16:00:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.643 16:00:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:48.643 16:00:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:48.643 16:00:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.643 16:00:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.643 16:00:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:48.643 16:00:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:48.643 16:00:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:48.643 16:00:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.643 16:00:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.643 16:00:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.643 16:00:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.643 16:00:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.643 16:00:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.643 16:00:28 -- paths/export.sh@5 -- # export PATH 00:17:48.643 16:00:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.643 16:00:28 -- nvmf/common.sh@47 -- # : 0 00:17:48.643 16:00:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:48.643 16:00:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:48.643 16:00:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:48.643 16:00:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.643 16:00:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.643 16:00:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:48.643 16:00:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:48.643 16:00:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:48.643 16:00:28 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:48.643 16:00:28 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:48.643 16:00:28 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:48.643 16:00:28 -- target/fio.sh@16 -- # nvmftestinit 00:17:48.643 16:00:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:48.643 16:00:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:48.643 16:00:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:48.643 16:00:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:48.643 16:00:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:48.643 16:00:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.643 16:00:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:48.643 16:00:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.643 16:00:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:48.643 16:00:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:48.643 16:00:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:48.643 16:00:28 -- 
common/autotest_common.sh@10 -- # set +x 00:17:53.909 16:00:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:53.909 16:00:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:53.909 16:00:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:53.909 16:00:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:53.909 16:00:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:53.909 16:00:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:53.909 16:00:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:53.909 16:00:33 -- nvmf/common.sh@295 -- # net_devs=() 00:17:53.909 16:00:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:53.910 16:00:33 -- nvmf/common.sh@296 -- # e810=() 00:17:53.910 16:00:33 -- nvmf/common.sh@296 -- # local -ga e810 00:17:53.910 16:00:33 -- nvmf/common.sh@297 -- # x722=() 00:17:53.910 16:00:33 -- nvmf/common.sh@297 -- # local -ga x722 00:17:53.910 16:00:33 -- nvmf/common.sh@298 -- # mlx=() 00:17:53.910 16:00:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:53.910 16:00:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:53.910 16:00:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:53.910 16:00:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:53.910 16:00:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:53.910 16:00:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:53.910 16:00:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:53.910 16:00:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:53.910 16:00:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:53.910 16:00:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:53.910 16:00:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:53.910 16:00:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:53.910 16:00:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:53.910 16:00:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:53.910 16:00:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:53.910 16:00:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:53.910 16:00:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:53.910 16:00:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:53.910 16:00:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:53.910 16:00:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:53.910 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:53.910 16:00:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:53.910 16:00:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:53.910 16:00:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.910 16:00:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.910 16:00:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:53.910 16:00:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:53.910 16:00:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:53.910 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:53.910 16:00:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:53.910 16:00:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:53.910 16:00:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.910 16:00:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
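The nvmf/common.sh prologue above enumerates candidate NICs purely by PCI ID: both 0000:86:00.x functions match the Intel E810 device ID 0x159b bound to the ice driver, the Mellanox-specific 0x1017/0x1019 branches fall through, and each function is then resolved to its kernel netdev through sysfs (the "Found net devices under ..." lines that follow). A rough manual equivalent, assuming the same bus addresses as this rig:

  # map each E810 PCI function to its net device name
  for bdf in 0000:86:00.0 0000:86:00.1; do
      echo "$bdf -> $(ls /sys/bus/pci/devices/$bdf/net/)"   # cvl_0_0 and cvl_0_1 here
  done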
00:17:53.910 16:00:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:53.910 16:00:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:53.910 16:00:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:53.910 16:00:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:53.910 16:00:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:53.910 16:00:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.910 16:00:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:53.910 16:00:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.910 16:00:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:53.910 Found net devices under 0000:86:00.0: cvl_0_0 00:17:53.910 16:00:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.910 16:00:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:53.910 16:00:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.910 16:00:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:53.910 16:00:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.910 16:00:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:53.910 Found net devices under 0000:86:00.1: cvl_0_1 00:17:53.910 16:00:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.910 16:00:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:53.910 16:00:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:53.910 16:00:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:53.910 16:00:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:53.910 16:00:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:53.910 16:00:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:53.910 16:00:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:53.910 16:00:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:53.910 16:00:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:53.910 16:00:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:53.910 16:00:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:53.910 16:00:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:53.910 16:00:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:53.910 16:00:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:53.910 16:00:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:53.910 16:00:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:53.910 16:00:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:53.910 16:00:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:53.910 16:00:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:53.910 16:00:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:53.910 16:00:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:53.910 16:00:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:53.910 16:00:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:53.910 16:00:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:53.910 16:00:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:53.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:53.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:17:53.910 00:17:53.910 --- 10.0.0.2 ping statistics --- 00:17:53.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.910 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:17:53.910 16:00:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:53.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:53.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.439 ms 00:17:53.910 00:17:53.910 --- 10.0.0.1 ping statistics --- 00:17:53.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.910 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:17:53.910 16:00:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.910 16:00:33 -- nvmf/common.sh@411 -- # return 0 00:17:53.910 16:00:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:53.910 16:00:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.910 16:00:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:53.910 16:00:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:53.910 16:00:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.910 16:00:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:53.910 16:00:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:53.910 16:00:33 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:53.910 16:00:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:53.910 16:00:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:53.910 16:00:33 -- common/autotest_common.sh@10 -- # set +x 00:17:53.910 16:00:33 -- nvmf/common.sh@470 -- # nvmfpid=2451364 00:17:53.910 16:00:33 -- nvmf/common.sh@471 -- # waitforlisten 2451364 00:17:53.910 16:00:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:53.910 16:00:33 -- common/autotest_common.sh@817 -- # '[' -z 2451364 ']' 00:17:53.910 16:00:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.910 16:00:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:53.910 16:00:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.910 16:00:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:53.910 16:00:33 -- common/autotest_common.sh@10 -- # set +x 00:17:53.910 [2024-04-26 16:00:33.561583] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:17:53.910 [2024-04-26 16:00:33.561675] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.168 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.168 [2024-04-26 16:00:33.672274] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:54.427 [2024-04-26 16:00:33.892475] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.427 [2024-04-26 16:00:33.892517] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
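Above, one E810 port is moved into a private network namespace so that initiator and target traffic really crosses the physical link: cvl_0_0 becomes the target side at 10.0.0.2/24 inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, both directions answer a single ping, and nvmf_tgt is then launched inside the namespace with a four-core mask. Condensed to the essentials, with the workspace path shortened, this is roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # sanity check before starting the target
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF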
00:17:54.428 [2024-04-26 16:00:33.892526] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.428 [2024-04-26 16:00:33.892536] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.428 [2024-04-26 16:00:33.892544] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:54.428 [2024-04-26 16:00:33.892616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.428 [2024-04-26 16:00:33.892693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.428 [2024-04-26 16:00:33.892749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.428 [2024-04-26 16:00:33.892757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:54.687 16:00:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:54.687 16:00:34 -- common/autotest_common.sh@850 -- # return 0 00:17:54.687 16:00:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:54.687 16:00:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:54.688 16:00:34 -- common/autotest_common.sh@10 -- # set +x 00:17:54.947 16:00:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.947 16:00:34 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:54.947 [2024-04-26 16:00:34.527763] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.947 16:00:34 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:55.221 16:00:34 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:55.222 16:00:34 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:55.491 16:00:35 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:55.491 16:00:35 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:55.750 16:00:35 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:17:55.750 16:00:35 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:56.008 16:00:35 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:56.008 16:00:35 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:56.266 16:00:35 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:56.524 16:00:36 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:56.524 16:00:36 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:56.783 16:00:36 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:56.783 16:00:36 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:57.042 16:00:36 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:57.042 16:00:36 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:57.301 16:00:36 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:57.301 16:00:36 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:57.301 16:00:36 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:57.560 16:00:37 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:57.560 16:00:37 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:57.819 16:00:37 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:57.819 [2024-04-26 16:00:37.491085] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.077 16:00:37 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:58.077 16:00:37 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:58.336 16:00:37 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:59.726 16:00:39 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:59.726 16:00:39 -- common/autotest_common.sh@1184 -- # local i=0 00:17:59.726 16:00:39 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:17:59.726 16:00:39 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:17:59.726 16:00:39 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:17:59.726 16:00:39 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:01.631 16:00:41 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:01.631 16:00:41 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:01.631 16:00:41 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:01.631 16:00:41 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:18:01.631 16:00:41 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:01.631 16:00:41 -- common/autotest_common.sh@1194 -- # return 0 00:18:01.631 16:00:41 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:01.631 [global] 00:18:01.631 thread=1 00:18:01.631 invalidate=1 00:18:01.631 rw=write 00:18:01.631 time_based=1 00:18:01.631 runtime=1 00:18:01.631 ioengine=libaio 00:18:01.631 direct=1 00:18:01.631 bs=4096 00:18:01.631 iodepth=1 00:18:01.631 norandommap=0 00:18:01.631 numjobs=1 00:18:01.631 00:18:01.631 verify_dump=1 00:18:01.631 verify_backlog=512 00:18:01.631 verify_state_save=0 00:18:01.631 do_verify=1 00:18:01.631 verify=crc32c-intel 00:18:01.631 [job0] 00:18:01.631 filename=/dev/nvme0n1 00:18:01.631 [job1] 00:18:01.631 filename=/dev/nvme0n2 00:18:01.631 [job2] 00:18:01.631 filename=/dev/nvme0n3 00:18:01.631 [job3] 00:18:01.631 filename=/dev/nvme0n4 00:18:01.631 Could not set queue depth (nvme0n1) 00:18:01.631 Could not set queue depth (nvme0n2) 00:18:01.631 Could not set queue depth (nvme0n3) 00:18:01.631 Could not set queue depth (nvme0n4) 00:18:01.889 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:18:01.889 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:01.889 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:01.889 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:01.889 fio-3.35 00:18:01.889 Starting 4 threads 00:18:03.267 00:18:03.267 job0: (groupid=0, jobs=1): err= 0: pid=2452933: Fri Apr 26 16:00:42 2024 00:18:03.267 read: IOPS=20, BW=81.2KiB/s (83.2kB/s)(84.0KiB/1034msec) 00:18:03.267 slat (nsec): min=10284, max=23721, avg=21417.29, stdev=2657.47 00:18:03.267 clat (usec): min=40802, max=42106, avg=41685.28, stdev=416.56 00:18:03.267 lat (usec): min=40813, max=42127, avg=41706.69, stdev=417.97 00:18:03.267 clat percentiles (usec): 00:18:03.267 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:03.267 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:18:03.267 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:03.267 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:03.267 | 99.99th=[42206] 00:18:03.267 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:18:03.267 slat (nsec): min=10259, max=40877, avg=11805.46, stdev=1819.33 00:18:03.267 clat (usec): min=196, max=755, avg=293.54, stdev=80.42 00:18:03.267 lat (usec): min=207, max=767, avg=305.35, stdev=81.15 00:18:03.267 clat percentiles (usec): 00:18:03.267 | 1.00th=[ 219], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 251], 00:18:03.267 | 30.00th=[ 253], 40.00th=[ 255], 50.00th=[ 258], 60.00th=[ 262], 00:18:03.267 | 70.00th=[ 277], 80.00th=[ 326], 90.00th=[ 453], 95.00th=[ 465], 00:18:03.267 | 99.00th=[ 644], 99.50th=[ 668], 99.90th=[ 758], 99.95th=[ 758], 00:18:03.267 | 99.99th=[ 758] 00:18:03.267 bw ( KiB/s): min= 4096, max= 4096, per=36.23%, avg=4096.00, stdev= 0.00, samples=1 00:18:03.268 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:03.268 lat (usec) : 250=18.39%, 500=76.36%, 750=1.13%, 1000=0.19% 00:18:03.268 lat (msec) : 50=3.94% 00:18:03.268 cpu : usr=0.58%, sys=0.77%, ctx=533, majf=0, minf=1 00:18:03.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:03.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:03.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:03.268 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:03.268 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:03.268 job1: (groupid=0, jobs=1): err= 0: pid=2452934: Fri Apr 26 16:00:42 2024 00:18:03.268 read: IOPS=20, BW=81.1KiB/s (83.0kB/s)(84.0KiB/1036msec) 00:18:03.268 slat (nsec): min=9662, max=23085, avg=21801.71, stdev=2830.06 00:18:03.268 clat (usec): min=41776, max=42074, avg=41957.46, stdev=72.89 00:18:03.268 lat (usec): min=41785, max=42096, avg=41979.26, stdev=74.38 00:18:03.268 clat percentiles (usec): 00:18:03.268 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:18:03.268 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:18:03.268 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:03.268 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:03.268 | 99.99th=[42206] 00:18:03.268 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:18:03.268 slat (nsec): min=9494, 
max=42367, avg=11012.23, stdev=2012.57 00:18:03.268 clat (usec): min=227, max=619, avg=287.60, stdev=61.05 00:18:03.268 lat (usec): min=237, max=661, avg=298.61, stdev=61.43 00:18:03.268 clat percentiles (usec): 00:18:03.268 | 1.00th=[ 235], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 251], 00:18:03.268 | 30.00th=[ 255], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:18:03.268 | 70.00th=[ 281], 80.00th=[ 318], 90.00th=[ 371], 95.00th=[ 461], 00:18:03.268 | 99.00th=[ 474], 99.50th=[ 494], 99.90th=[ 619], 99.95th=[ 619], 00:18:03.268 | 99.99th=[ 619] 00:18:03.268 bw ( KiB/s): min= 4096, max= 4096, per=36.23%, avg=4096.00, stdev= 0.00, samples=1 00:18:03.268 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:03.268 lat (usec) : 250=17.82%, 500=78.05%, 750=0.19% 00:18:03.268 lat (msec) : 50=3.94% 00:18:03.268 cpu : usr=0.19%, sys=0.68%, ctx=533, majf=0, minf=1 00:18:03.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:03.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:03.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:03.268 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:03.268 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:03.268 job2: (groupid=0, jobs=1): err= 0: pid=2452935: Fri Apr 26 16:00:42 2024 00:18:03.268 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:03.268 slat (nsec): min=5498, max=82143, avg=11426.24, stdev=7276.90 00:18:03.268 clat (usec): min=325, max=724, avg=505.34, stdev=109.34 00:18:03.268 lat (usec): min=332, max=749, avg=516.77, stdev=115.00 00:18:03.268 clat percentiles (usec): 00:18:03.268 | 1.00th=[ 347], 5.00th=[ 367], 10.00th=[ 396], 20.00th=[ 420], 00:18:03.268 | 30.00th=[ 441], 40.00th=[ 453], 50.00th=[ 457], 60.00th=[ 474], 00:18:03.268 | 70.00th=[ 570], 80.00th=[ 652], 90.00th=[ 693], 95.00th=[ 693], 00:18:03.268 | 99.00th=[ 717], 99.50th=[ 717], 99.90th=[ 725], 99.95th=[ 725], 00:18:03.268 | 99.99th=[ 725] 00:18:03.268 write: IOPS=1390, BW=5562KiB/s (5696kB/s)(5568KiB/1001msec); 0 zone resets 00:18:03.268 slat (usec): min=4, max=20398, avg=25.53, stdev=546.54 00:18:03.268 clat (usec): min=176, max=4254, avg=307.60, stdev=135.25 00:18:03.268 lat (usec): min=186, max=21217, avg=333.13, stdev=576.59 00:18:03.268 clat percentiles (usec): 00:18:03.268 | 1.00th=[ 190], 5.00th=[ 204], 10.00th=[ 217], 20.00th=[ 245], 00:18:03.268 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 318], 00:18:03.268 | 70.00th=[ 351], 80.00th=[ 379], 90.00th=[ 416], 95.00th=[ 465], 00:18:03.268 | 99.00th=[ 578], 99.50th=[ 652], 99.90th=[ 824], 99.95th=[ 4228], 00:18:03.268 | 99.99th=[ 4228] 00:18:03.268 bw ( KiB/s): min= 4488, max= 4488, per=39.70%, avg=4488.00, stdev= 0.00, samples=1 00:18:03.268 iops : min= 1122, max= 1122, avg=1122.00, stdev= 0.00, samples=1 00:18:03.268 lat (usec) : 250=17.14%, 500=67.96%, 750=14.82%, 1000=0.04% 00:18:03.268 lat (msec) : 10=0.04% 00:18:03.268 cpu : usr=2.00%, sys=2.20%, ctx=2423, majf=0, minf=2 00:18:03.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:03.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:03.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:03.268 issued rwts: total=1024,1392,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:03.268 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:03.268 job3: (groupid=0, jobs=1): err= 0: pid=2452936: Fri Apr 26 
16:00:42 2024 00:18:03.268 read: IOPS=24, BW=97.0KiB/s (99.3kB/s)(100KiB/1031msec) 00:18:03.268 slat (nsec): min=8160, max=23472, avg=19391.12, stdev=5491.67 00:18:03.268 clat (usec): min=390, max=42018, avg=35220.44, stdev=15480.94 00:18:03.268 lat (usec): min=400, max=42040, avg=35239.83, stdev=15485.66 00:18:03.268 clat percentiles (usec): 00:18:03.268 | 1.00th=[ 392], 5.00th=[ 494], 10.00th=[ 498], 20.00th=[41157], 00:18:03.268 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:18:03.268 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:03.268 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:03.268 | 99.99th=[42206] 00:18:03.268 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:18:03.268 slat (nsec): min=10716, max=58393, avg=12773.80, stdev=3956.53 00:18:03.268 clat (usec): min=221, max=828, avg=276.34, stdev=57.60 00:18:03.268 lat (usec): min=234, max=850, avg=289.12, stdev=58.48 00:18:03.268 clat percentiles (usec): 00:18:03.268 | 1.00th=[ 233], 5.00th=[ 245], 10.00th=[ 247], 20.00th=[ 251], 00:18:03.268 | 30.00th=[ 253], 40.00th=[ 255], 50.00th=[ 258], 60.00th=[ 260], 00:18:03.268 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 326], 95.00th=[ 383], 00:18:03.268 | 99.00th=[ 469], 99.50th=[ 627], 99.90th=[ 832], 99.95th=[ 832], 00:18:03.268 | 99.99th=[ 832] 00:18:03.268 bw ( KiB/s): min= 4096, max= 4096, per=36.23%, avg=4096.00, stdev= 0.00, samples=1 00:18:03.268 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:03.268 lat (usec) : 250=17.69%, 500=77.65%, 750=0.56%, 1000=0.19% 00:18:03.268 lat (msec) : 50=3.91% 00:18:03.268 cpu : usr=0.49%, sys=0.87%, ctx=537, majf=0, minf=1 00:18:03.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:03.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:03.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:03.268 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:03.268 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:03.268 00:18:03.268 Run status group 0 (all jobs): 00:18:03.268 READ: bw=4212KiB/s (4313kB/s), 81.1KiB/s-4092KiB/s (83.0kB/s-4190kB/s), io=4364KiB (4469kB), run=1001-1036msec 00:18:03.268 WRITE: bw=11.0MiB/s (11.6MB/s), 1977KiB/s-5562KiB/s (2024kB/s-5696kB/s), io=11.4MiB (12.0MB), run=1001-1036msec 00:18:03.268 00:18:03.268 Disk stats (read/write): 00:18:03.268 nvme0n1: ios=66/512, merge=0/0, ticks=733/149, in_queue=882, util=85.97% 00:18:03.268 nvme0n2: ios=45/512, merge=0/0, ticks=712/144, in_queue=856, util=86.28% 00:18:03.268 nvme0n3: ios=941/1024, merge=0/0, ticks=876/323, in_queue=1199, util=98.09% 00:18:03.268 nvme0n4: ios=48/512, merge=0/0, ticks=801/136, in_queue=937, util=95.26% 00:18:03.268 16:00:42 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:03.268 [global] 00:18:03.268 thread=1 00:18:03.268 invalidate=1 00:18:03.268 rw=randwrite 00:18:03.268 time_based=1 00:18:03.268 runtime=1 00:18:03.268 ioengine=libaio 00:18:03.268 direct=1 00:18:03.268 bs=4096 00:18:03.268 iodepth=1 00:18:03.268 norandommap=0 00:18:03.268 numjobs=1 00:18:03.268 00:18:03.268 verify_dump=1 00:18:03.268 verify_backlog=512 00:18:03.268 verify_state_save=0 00:18:03.268 do_verify=1 00:18:03.268 verify=crc32c-intel 00:18:03.268 [job0] 00:18:03.268 filename=/dev/nvme0n1 00:18:03.268 [job1] 00:18:03.268 
filename=/dev/nvme0n2 00:18:03.268 [job2] 00:18:03.268 filename=/dev/nvme0n3 00:18:03.268 [job3] 00:18:03.268 filename=/dev/nvme0n4 00:18:03.268 Could not set queue depth (nvme0n1) 00:18:03.268 Could not set queue depth (nvme0n2) 00:18:03.268 Could not set queue depth (nvme0n3) 00:18:03.268 Could not set queue depth (nvme0n4) 00:18:03.527 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:03.527 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:03.527 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:03.527 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:03.527 fio-3.35 00:18:03.527 Starting 4 threads 00:18:04.919 00:18:04.919 job0: (groupid=0, jobs=1): err= 0: pid=2453311: Fri Apr 26 16:00:44 2024 00:18:04.919 read: IOPS=19, BW=79.0KiB/s (80.9kB/s)(80.0KiB/1013msec) 00:18:04.919 slat (nsec): min=10549, max=26167, avg=22052.60, stdev=3242.03 00:18:04.919 clat (usec): min=41044, max=42086, avg=41898.91, stdev=234.85 00:18:04.919 lat (usec): min=41068, max=42108, avg=41920.96, stdev=235.76 00:18:04.919 clat percentiles (usec): 00:18:04.919 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:18:04.919 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:18:04.919 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:04.919 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:04.919 | 99.99th=[42206] 00:18:04.919 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:18:04.919 slat (nsec): min=10867, max=45080, avg=12537.96, stdev=2437.88 00:18:04.919 clat (usec): min=266, max=706, avg=323.37, stdev=55.64 00:18:04.919 lat (usec): min=277, max=751, avg=335.91, stdev=56.45 00:18:04.919 clat percentiles (usec): 00:18:04.919 | 1.00th=[ 273], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 293], 00:18:04.919 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 310], 00:18:04.919 | 70.00th=[ 318], 80.00th=[ 330], 90.00th=[ 404], 95.00th=[ 465], 00:18:04.919 | 99.00th=[ 506], 99.50th=[ 537], 99.90th=[ 709], 99.95th=[ 709], 00:18:04.919 | 99.99th=[ 709] 00:18:04.919 bw ( KiB/s): min= 4096, max= 4096, per=38.95%, avg=4096.00, stdev= 0.00, samples=1 00:18:04.919 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:04.919 lat (usec) : 500=94.92%, 750=1.32% 00:18:04.919 lat (msec) : 50=3.76% 00:18:04.919 cpu : usr=0.69%, sys=0.69%, ctx=533, majf=0, minf=1 00:18:04.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:04.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.919 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:04.919 job1: (groupid=0, jobs=1): err= 0: pid=2453312: Fri Apr 26 16:00:44 2024 00:18:04.919 read: IOPS=170, BW=681KiB/s (697kB/s)(704KiB/1034msec) 00:18:04.919 slat (nsec): min=6798, max=39430, avg=17926.81, stdev=6902.81 00:18:04.919 clat (usec): min=500, max=42079, avg=4863.89, stdev=12457.56 00:18:04.919 lat (usec): min=519, max=42102, avg=4881.82, stdev=12457.20 00:18:04.919 clat percentiles (usec): 00:18:04.919 | 1.00th=[ 515], 5.00th=[ 553], 10.00th=[ 594], 20.00th=[ 635], 
00:18:04.919 | 30.00th=[ 652], 40.00th=[ 676], 50.00th=[ 693], 60.00th=[ 701], 00:18:04.919 | 70.00th=[ 709], 80.00th=[ 717], 90.00th=[41157], 95.00th=[41681], 00:18:04.919 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:04.919 | 99.99th=[42206] 00:18:04.919 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:18:04.919 slat (nsec): min=9295, max=39622, avg=10871.31, stdev=1945.49 00:18:04.919 clat (usec): min=249, max=648, avg=324.52, stdev=52.37 00:18:04.919 lat (usec): min=259, max=687, avg=335.39, stdev=52.74 00:18:04.919 clat percentiles (usec): 00:18:04.919 | 1.00th=[ 262], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 293], 00:18:04.920 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 314], 00:18:04.920 | 70.00th=[ 322], 80.00th=[ 338], 90.00th=[ 404], 95.00th=[ 453], 00:18:04.920 | 99.00th=[ 498], 99.50th=[ 502], 99.90th=[ 652], 99.95th=[ 652], 00:18:04.920 | 99.99th=[ 652] 00:18:04.920 bw ( KiB/s): min= 4096, max= 4096, per=38.95%, avg=4096.00, stdev= 0.00, samples=1 00:18:04.920 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:04.920 lat (usec) : 250=0.29%, 500=73.69%, 750=22.82%, 1000=0.29% 00:18:04.920 lat (msec) : 2=0.29%, 50=2.62% 00:18:04.920 cpu : usr=0.29%, sys=0.97%, ctx=688, majf=0, minf=1 00:18:04.920 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:04.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.920 issued rwts: total=176,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.920 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:04.920 job2: (groupid=0, jobs=1): err= 0: pid=2453313: Fri Apr 26 16:00:44 2024 00:18:04.920 read: IOPS=205, BW=822KiB/s (842kB/s)(852KiB/1036msec) 00:18:04.920 slat (nsec): min=7061, max=52926, avg=21323.15, stdev=5521.80 00:18:04.920 clat (usec): min=647, max=42242, avg=3937.73, stdev=10846.14 00:18:04.920 lat (usec): min=670, max=42260, avg=3959.05, stdev=10845.72 00:18:04.920 clat percentiles (usec): 00:18:04.920 | 1.00th=[ 693], 5.00th=[ 725], 10.00th=[ 750], 20.00th=[ 791], 00:18:04.920 | 30.00th=[ 807], 40.00th=[ 824], 50.00th=[ 840], 60.00th=[ 873], 00:18:04.920 | 70.00th=[ 922], 80.00th=[ 988], 90.00th=[ 1029], 95.00th=[42206], 00:18:04.920 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:04.920 | 99.99th=[42206] 00:18:04.920 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:18:04.920 slat (usec): min=7, max=245, avg=11.83, stdev=13.78 00:18:04.920 clat (usec): min=237, max=982, avg=358.07, stdev=82.12 00:18:04.920 lat (usec): min=247, max=998, avg=369.90, stdev=84.07 00:18:04.920 clat percentiles (usec): 00:18:04.920 | 1.00th=[ 253], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 293], 00:18:04.920 | 30.00th=[ 306], 40.00th=[ 322], 50.00th=[ 334], 60.00th=[ 347], 00:18:04.920 | 70.00th=[ 392], 80.00th=[ 441], 90.00th=[ 461], 95.00th=[ 474], 00:18:04.920 | 99.00th=[ 660], 99.50th=[ 693], 99.90th=[ 979], 99.95th=[ 979], 00:18:04.920 | 99.99th=[ 979] 00:18:04.920 bw ( KiB/s): min= 4096, max= 4096, per=38.95%, avg=4096.00, stdev= 0.00, samples=1 00:18:04.920 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:04.920 lat (usec) : 250=0.28%, 500=67.45%, 750=5.93%, 1000=21.52% 00:18:04.920 lat (msec) : 2=2.62%, 50=2.21% 00:18:04.920 cpu : usr=0.29%, sys=1.16%, ctx=726, majf=0, minf=1 00:18:04.920 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:04.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.920 issued rwts: total=213,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.920 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:04.920 job3: (groupid=0, jobs=1): err= 0: pid=2453314: Fri Apr 26 16:00:44 2024 00:18:04.920 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:04.920 slat (nsec): min=7325, max=52049, avg=9362.17, stdev=2716.53 00:18:04.920 clat (usec): min=349, max=2575, avg=611.16, stdev=125.24 00:18:04.920 lat (usec): min=357, max=2602, avg=620.52, stdev=126.04 00:18:04.920 clat percentiles (usec): 00:18:04.920 | 1.00th=[ 367], 5.00th=[ 523], 10.00th=[ 545], 20.00th=[ 562], 00:18:04.920 | 30.00th=[ 578], 40.00th=[ 594], 50.00th=[ 603], 60.00th=[ 611], 00:18:04.920 | 70.00th=[ 619], 80.00th=[ 635], 90.00th=[ 676], 95.00th=[ 717], 00:18:04.920 | 99.00th=[ 1172], 99.50th=[ 1500], 99.90th=[ 1647], 99.95th=[ 2573], 00:18:04.920 | 99.99th=[ 2573] 00:18:04.920 write: IOPS=1186, BW=4747KiB/s (4861kB/s)(4752KiB/1001msec); 0 zone resets 00:18:04.920 slat (nsec): min=10447, max=40552, avg=12081.52, stdev=2082.82 00:18:04.920 clat (usec): min=207, max=872, avg=288.90, stdev=64.57 00:18:04.920 lat (usec): min=223, max=908, avg=300.98, stdev=65.17 00:18:04.920 clat percentiles (usec): 00:18:04.920 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 237], 00:18:04.920 | 30.00th=[ 245], 40.00th=[ 260], 50.00th=[ 281], 60.00th=[ 289], 00:18:04.920 | 70.00th=[ 306], 80.00th=[ 330], 90.00th=[ 371], 95.00th=[ 416], 00:18:04.920 | 99.00th=[ 474], 99.50th=[ 545], 99.90th=[ 857], 99.95th=[ 873], 00:18:04.920 | 99.99th=[ 873] 00:18:04.920 bw ( KiB/s): min= 4096, max= 4096, per=38.95%, avg=4096.00, stdev= 0.00, samples=1 00:18:04.920 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:04.920 lat (usec) : 250=18.26%, 500=36.89%, 750=42.90%, 1000=1.31% 00:18:04.920 lat (msec) : 2=0.59%, 4=0.05% 00:18:04.920 cpu : usr=2.00%, sys=3.80%, ctx=2212, majf=0, minf=2 00:18:04.920 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:04.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.920 issued rwts: total=1024,1188,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.920 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:04.920 00:18:04.920 Run status group 0 (all jobs): 00:18:04.920 READ: bw=5533KiB/s (5666kB/s), 79.0KiB/s-4092KiB/s (80.9kB/s-4190kB/s), io=5732KiB (5870kB), run=1001-1036msec 00:18:04.920 WRITE: bw=10.3MiB/s (10.8MB/s), 1977KiB/s-4747KiB/s (2024kB/s-4861kB/s), io=10.6MiB (11.2MB), run=1001-1036msec 00:18:04.920 00:18:04.920 Disk stats (read/write): 00:18:04.920 nvme0n1: ios=53/512, merge=0/0, ticks=1658/159, in_queue=1817, util=97.29% 00:18:04.920 nvme0n2: ios=209/512, merge=0/0, ticks=701/162, in_queue=863, util=86.97% 00:18:04.920 nvme0n3: ios=207/512, merge=0/0, ticks=578/179, in_queue=757, util=87.85% 00:18:04.920 nvme0n4: ios=819/1024, merge=0/0, ticks=497/279, in_queue=776, util=89.29% 00:18:04.920 16:00:44 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:04.920 [global] 00:18:04.920 thread=1 00:18:04.920 invalidate=1 00:18:04.920 rw=write 00:18:04.920 time_based=1 00:18:04.920 
runtime=1 00:18:04.920 ioengine=libaio 00:18:04.920 direct=1 00:18:04.920 bs=4096 00:18:04.920 iodepth=128 00:18:04.920 norandommap=0 00:18:04.920 numjobs=1 00:18:04.920 00:18:04.920 verify_dump=1 00:18:04.920 verify_backlog=512 00:18:04.920 verify_state_save=0 00:18:04.920 do_verify=1 00:18:04.920 verify=crc32c-intel 00:18:04.920 [job0] 00:18:04.920 filename=/dev/nvme0n1 00:18:04.920 [job1] 00:18:04.920 filename=/dev/nvme0n2 00:18:04.920 [job2] 00:18:04.920 filename=/dev/nvme0n3 00:18:04.920 [job3] 00:18:04.920 filename=/dev/nvme0n4 00:18:04.920 Could not set queue depth (nvme0n1) 00:18:04.920 Could not set queue depth (nvme0n2) 00:18:04.920 Could not set queue depth (nvme0n3) 00:18:04.920 Could not set queue depth (nvme0n4) 00:18:05.178 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:05.178 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:05.178 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:05.178 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:05.178 fio-3.35 00:18:05.178 Starting 4 threads 00:18:06.591 00:18:06.591 job0: (groupid=0, jobs=1): err= 0: pid=2453678: Fri Apr 26 16:00:45 2024 00:18:06.591 read: IOPS=3396, BW=13.3MiB/s (13.9MB/s)(13.3MiB/1004msec) 00:18:06.591 slat (nsec): min=1032, max=19143k, avg=156539.87, stdev=874869.24 00:18:06.591 clat (usec): min=562, max=79279, avg=19223.88, stdev=10589.42 00:18:06.591 lat (usec): min=4185, max=79282, avg=19380.42, stdev=10627.22 00:18:06.591 clat percentiles (usec): 00:18:06.591 | 1.00th=[ 8848], 5.00th=[10421], 10.00th=[11207], 20.00th=[12125], 00:18:06.591 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13566], 60.00th=[18220], 00:18:06.591 | 70.00th=[22938], 80.00th=[26346], 90.00th=[32637], 95.00th=[41681], 00:18:06.591 | 99.00th=[57410], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:18:06.591 | 99.99th=[79168] 00:18:06.591 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:18:06.591 slat (nsec): min=1871, max=14730k, avg=126640.89, stdev=706827.37 00:18:06.591 clat (usec): min=7609, max=47103, avg=16960.05, stdev=6669.38 00:18:06.591 lat (usec): min=7618, max=47109, avg=17086.70, stdev=6679.90 00:18:06.591 clat percentiles (usec): 00:18:06.591 | 1.00th=[ 8979], 5.00th=[10028], 10.00th=[11076], 20.00th=[11731], 00:18:06.591 | 30.00th=[12780], 40.00th=[13698], 50.00th=[14746], 60.00th=[16909], 00:18:06.591 | 70.00th=[18744], 80.00th=[21627], 90.00th=[26084], 95.00th=[29754], 00:18:06.591 | 99.00th=[39584], 99.50th=[44303], 99.90th=[46924], 99.95th=[46924], 00:18:06.591 | 99.99th=[46924] 00:18:06.591 bw ( KiB/s): min=12288, max=16384, per=24.91%, avg=14336.00, stdev=2896.31, samples=2 00:18:06.591 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:18:06.591 lat (usec) : 750=0.01% 00:18:06.591 lat (msec) : 10=3.97%, 20=64.40%, 50=30.74%, 100=0.87% 00:18:06.591 cpu : usr=1.69%, sys=2.79%, ctx=478, majf=0, minf=1 00:18:06.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:06.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:06.591 issued rwts: total=3410,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:06.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:06.591 job1: 
(groupid=0, jobs=1): err= 0: pid=2453679: Fri Apr 26 16:00:45 2024 00:18:06.591 read: IOPS=3071, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1009msec) 00:18:06.591 slat (nsec): min=1080, max=33521k, avg=163089.44, stdev=1388960.84 00:18:06.591 clat (usec): min=2966, max=83011, avg=23767.65, stdev=13366.79 00:18:06.591 lat (usec): min=2977, max=83021, avg=23930.74, stdev=13460.41 00:18:06.591 clat percentiles (usec): 00:18:06.591 | 1.00th=[ 4178], 5.00th=[10421], 10.00th=[11863], 20.00th=[12387], 00:18:06.591 | 30.00th=[14877], 40.00th=[17695], 50.00th=[20317], 60.00th=[22676], 00:18:06.591 | 70.00th=[26870], 80.00th=[32900], 90.00th=[42206], 95.00th=[52691], 00:18:06.591 | 99.00th=[69731], 99.50th=[69731], 99.90th=[70779], 99.95th=[79168], 00:18:06.591 | 99.99th=[83362] 00:18:06.591 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:18:06.591 slat (nsec): min=1950, max=20596k, avg=111856.30, stdev=1018458.24 00:18:06.591 clat (usec): min=352, max=54028, avg=15128.96, stdev=9015.49 00:18:06.591 lat (usec): min=359, max=54035, avg=15240.81, stdev=9110.67 00:18:06.591 clat percentiles (usec): 00:18:06.591 | 1.00th=[ 1811], 5.00th=[ 4178], 10.00th=[ 5735], 20.00th=[ 8225], 00:18:06.591 | 30.00th=[ 9765], 40.00th=[11076], 50.00th=[12911], 60.00th=[14222], 00:18:06.591 | 70.00th=[17433], 80.00th=[20841], 90.00th=[29230], 95.00th=[34341], 00:18:06.591 | 99.00th=[38536], 99.50th=[40633], 99.90th=[54264], 99.95th=[54264], 00:18:06.591 | 99.99th=[54264] 00:18:06.591 bw ( KiB/s): min=11152, max=16720, per=24.22%, avg=13936.00, stdev=3937.17, samples=2 00:18:06.591 iops : min= 2788, max= 4180, avg=3484.00, stdev=984.29, samples=2 00:18:06.591 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.07% 00:18:06.591 lat (msec) : 2=0.55%, 4=2.04%, 10=16.40%, 20=44.79%, 50=33.02% 00:18:06.591 lat (msec) : 100=3.04% 00:18:06.591 cpu : usr=2.38%, sys=2.58%, ctx=379, majf=0, minf=1 00:18:06.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:06.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:06.591 issued rwts: total=3099,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:06.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:06.591 job2: (groupid=0, jobs=1): err= 0: pid=2453680: Fri Apr 26 16:00:45 2024 00:18:06.591 read: IOPS=3885, BW=15.2MiB/s (15.9MB/s)(15.9MiB/1045msec) 00:18:06.591 slat (nsec): min=1139, max=18155k, avg=125142.16, stdev=883124.58 00:18:06.591 clat (usec): min=2033, max=62040, avg=18204.94, stdev=7827.96 00:18:06.591 lat (usec): min=4664, max=65981, avg=18330.08, stdev=7849.59 00:18:06.591 clat percentiles (usec): 00:18:06.591 | 1.00th=[ 8717], 5.00th=[10945], 10.00th=[12387], 20.00th=[13173], 00:18:06.591 | 30.00th=[14091], 40.00th=[15270], 50.00th=[16319], 60.00th=[17957], 00:18:06.591 | 70.00th=[19530], 80.00th=[21365], 90.00th=[23987], 95.00th=[29754], 00:18:06.591 | 99.00th=[59507], 99.50th=[59507], 99.90th=[62129], 99.95th=[62129], 00:18:06.591 | 99.99th=[62129] 00:18:06.591 write: IOPS=3919, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1045msec); 0 zone resets 00:18:06.591 slat (usec): min=2, max=14670, avg=110.20, stdev=791.45 00:18:06.591 clat (usec): min=3619, max=28083, avg=14263.69, stdev=4221.91 00:18:06.591 lat (usec): min=3637, max=28101, avg=14373.89, stdev=4223.52 00:18:06.591 clat percentiles (usec): 00:18:06.591 | 1.00th=[ 5604], 5.00th=[ 7373], 10.00th=[ 8848], 20.00th=[10159], 00:18:06.591 | 30.00th=[11600], 
40.00th=[13304], 50.00th=[14091], 60.00th=[15401], 00:18:06.591 | 70.00th=[16581], 80.00th=[17957], 90.00th=[20055], 95.00th=[21103], 00:18:06.591 | 99.00th=[23725], 99.50th=[25822], 99.90th=[25822], 99.95th=[25822], 00:18:06.591 | 99.99th=[28181] 00:18:06.591 bw ( KiB/s): min=14896, max=17872, per=28.47%, avg=16384.00, stdev=2104.35, samples=2 00:18:06.591 iops : min= 3724, max= 4468, avg=4096.00, stdev=526.09, samples=2 00:18:06.591 lat (msec) : 4=0.12%, 10=10.47%, 20=69.49%, 50=18.88%, 100=1.03% 00:18:06.591 cpu : usr=3.26%, sys=4.89%, ctx=352, majf=0, minf=1 00:18:06.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:06.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:06.591 issued rwts: total=4060,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:06.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:06.591 job3: (groupid=0, jobs=1): err= 0: pid=2453681: Fri Apr 26 16:00:45 2024 00:18:06.591 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:18:06.591 slat (nsec): min=1135, max=7161.5k, avg=127774.11, stdev=686548.13 00:18:06.591 clat (usec): min=9407, max=30152, avg=16826.17, stdev=3958.32 00:18:06.591 lat (usec): min=9420, max=30181, avg=16953.94, stdev=4008.83 00:18:06.591 clat percentiles (usec): 00:18:06.591 | 1.00th=[10421], 5.00th=[11863], 10.00th=[12387], 20.00th=[13304], 00:18:06.591 | 30.00th=[13960], 40.00th=[14877], 50.00th=[16450], 60.00th=[17171], 00:18:06.591 | 70.00th=[18482], 80.00th=[20579], 90.00th=[22938], 95.00th=[24249], 00:18:06.591 | 99.00th=[25560], 99.50th=[27395], 99.90th=[29492], 99.95th=[29754], 00:18:06.591 | 99.99th=[30278] 00:18:06.591 write: IOPS=3751, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1005msec); 0 zone resets 00:18:06.591 slat (nsec): min=1961, max=31059k, avg=138219.56, stdev=898370.35 00:18:06.591 clat (usec): min=3837, max=37653, avg=16512.03, stdev=4646.81 00:18:06.591 lat (usec): min=4488, max=48006, avg=16650.25, stdev=4736.13 00:18:06.591 clat percentiles (usec): 00:18:06.591 | 1.00th=[ 8717], 5.00th=[11338], 10.00th=[11863], 20.00th=[12387], 00:18:06.591 | 30.00th=[12911], 40.00th=[14353], 50.00th=[16188], 60.00th=[16581], 00:18:06.591 | 70.00th=[18220], 80.00th=[20055], 90.00th=[22676], 95.00th=[25560], 00:18:06.591 | 99.00th=[31589], 99.50th=[31851], 99.90th=[34866], 99.95th=[35914], 00:18:06.591 | 99.99th=[37487] 00:18:06.591 bw ( KiB/s): min=12288, max=16856, per=25.32%, avg=14572.00, stdev=3230.06, samples=2 00:18:06.591 iops : min= 3072, max= 4214, avg=3643.00, stdev=807.52, samples=2 00:18:06.591 lat (msec) : 4=0.01%, 10=1.26%, 20=76.52%, 50=22.21% 00:18:06.591 cpu : usr=2.69%, sys=4.28%, ctx=408, majf=0, minf=1 00:18:06.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:06.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:06.591 issued rwts: total=3584,3770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:06.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:06.591 00:18:06.591 Run status group 0 (all jobs): 00:18:06.591 READ: bw=52.9MiB/s (55.5MB/s), 12.0MiB/s-15.2MiB/s (12.6MB/s-15.9MB/s), io=55.3MiB (58.0MB), run=1004-1045msec 00:18:06.591 WRITE: bw=56.2MiB/s (58.9MB/s), 13.9MiB/s-15.3MiB/s (14.5MB/s-16.1MB/s), io=58.7MiB (61.6MB), run=1004-1045msec 00:18:06.591 00:18:06.591 Disk stats (read/write): 
00:18:06.591 nvme0n1: ios=3091/3210, merge=0/0, ticks=15396/12829, in_queue=28225, util=85.87% 00:18:06.591 nvme0n2: ios=2609/2856, merge=0/0, ticks=46107/33971, in_queue=80078, util=91.08% 00:18:06.592 nvme0n3: ios=3386/3584, merge=0/0, ticks=51835/50761, in_queue=102596, util=93.77% 00:18:06.592 nvme0n4: ios=3131/3176, merge=0/0, ticks=17089/17141, in_queue=34230, util=95.08% 00:18:06.592 16:00:45 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:06.592 [global] 00:18:06.592 thread=1 00:18:06.592 invalidate=1 00:18:06.592 rw=randwrite 00:18:06.592 time_based=1 00:18:06.592 runtime=1 00:18:06.592 ioengine=libaio 00:18:06.592 direct=1 00:18:06.592 bs=4096 00:18:06.592 iodepth=128 00:18:06.592 norandommap=0 00:18:06.592 numjobs=1 00:18:06.592 00:18:06.592 verify_dump=1 00:18:06.592 verify_backlog=512 00:18:06.592 verify_state_save=0 00:18:06.592 do_verify=1 00:18:06.592 verify=crc32c-intel 00:18:06.592 [job0] 00:18:06.592 filename=/dev/nvme0n1 00:18:06.592 [job1] 00:18:06.592 filename=/dev/nvme0n2 00:18:06.592 [job2] 00:18:06.592 filename=/dev/nvme0n3 00:18:06.592 [job3] 00:18:06.592 filename=/dev/nvme0n4 00:18:06.592 Could not set queue depth (nvme0n1) 00:18:06.592 Could not set queue depth (nvme0n2) 00:18:06.592 Could not set queue depth (nvme0n3) 00:18:06.592 Could not set queue depth (nvme0n4) 00:18:06.849 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:06.849 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:06.849 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:06.849 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:06.849 fio-3.35 00:18:06.849 Starting 4 threads 00:18:08.221 00:18:08.221 job0: (groupid=0, jobs=1): err= 0: pid=2454055: Fri Apr 26 16:00:47 2024 00:18:08.221 read: IOPS=2492, BW=9970KiB/s (10.2MB/s)(9.80MiB/1007msec) 00:18:08.221 slat (nsec): min=1034, max=37745k, avg=174278.28, stdev=1362977.69 00:18:08.221 clat (usec): min=1200, max=93256, avg=23629.31, stdev=13392.12 00:18:08.221 lat (msec): min=6, max=101, avg=23.80, stdev=13.50 00:18:08.221 clat percentiles (usec): 00:18:08.221 | 1.00th=[ 7046], 5.00th=[10159], 10.00th=[10945], 20.00th=[13566], 00:18:08.221 | 30.00th=[16057], 40.00th=[19006], 50.00th=[21890], 60.00th=[22414], 00:18:08.221 | 70.00th=[25297], 80.00th=[29230], 90.00th=[36439], 95.00th=[63701], 00:18:08.221 | 99.00th=[71828], 99.50th=[80217], 99.90th=[92799], 99.95th=[92799], 00:18:08.221 | 99.99th=[92799] 00:18:08.221 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:18:08.221 slat (nsec): min=1751, max=11637k, avg=217084.57, stdev=1073964.75 00:18:08.221 clat (usec): min=3808, max=79042, avg=26249.01, stdev=18138.28 00:18:08.221 lat (usec): min=3816, max=79050, avg=26466.09, stdev=18259.66 00:18:08.221 clat percentiles (usec): 00:18:08.221 | 1.00th=[ 6128], 5.00th=[ 9372], 10.00th=[10945], 20.00th=[12518], 00:18:08.221 | 30.00th=[15008], 40.00th=[16450], 50.00th=[19006], 60.00th=[22676], 00:18:08.221 | 70.00th=[26084], 80.00th=[43779], 90.00th=[58459], 95.00th=[66323], 00:18:08.221 | 99.00th=[73925], 99.50th=[76022], 99.90th=[77071], 99.95th=[79168], 00:18:08.221 | 99.99th=[79168] 00:18:08.221 bw ( KiB/s): min= 8192, max=12288, per=17.90%, avg=10240.00, 
stdev=2896.31, samples=2 00:18:08.221 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:18:08.221 lat (msec) : 2=0.02%, 4=0.08%, 10=5.74%, 20=42.21%, 50=40.57% 00:18:08.221 lat (msec) : 100=11.38% 00:18:08.221 cpu : usr=1.19%, sys=2.39%, ctx=308, majf=0, minf=1 00:18:08.221 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:08.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:08.221 issued rwts: total=2510,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.221 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:08.221 job1: (groupid=0, jobs=1): err= 0: pid=2454057: Fri Apr 26 16:00:47 2024 00:18:08.221 read: IOPS=5044, BW=19.7MiB/s (20.7MB/s)(20.0MiB/1015msec) 00:18:08.221 slat (nsec): min=1339, max=10156k, avg=88648.27, stdev=634414.42 00:18:08.221 clat (usec): min=6212, max=31091, avg=11643.60, stdev=3418.34 00:18:08.221 lat (usec): min=6220, max=31119, avg=11732.25, stdev=3458.46 00:18:08.221 clat percentiles (usec): 00:18:08.221 | 1.00th=[ 6652], 5.00th=[ 7963], 10.00th=[ 8586], 20.00th=[ 8979], 00:18:08.221 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[11863], 00:18:08.221 | 70.00th=[12518], 80.00th=[13960], 90.00th=[15926], 95.00th=[17695], 00:18:08.221 | 99.00th=[21365], 99.50th=[29492], 99.90th=[29492], 99.95th=[29754], 00:18:08.222 | 99.99th=[31065] 00:18:08.222 write: IOPS=5279, BW=20.6MiB/s (21.6MB/s)(20.9MiB/1015msec); 0 zone resets 00:18:08.222 slat (usec): min=2, max=10260, avg=96.82, stdev=488.64 00:18:08.222 clat (usec): min=1047, max=29168, avg=12898.66, stdev=4171.89 00:18:08.222 lat (usec): min=1170, max=29174, avg=12995.49, stdev=4181.07 00:18:08.222 clat percentiles (usec): 00:18:08.222 | 1.00th=[ 4817], 5.00th=[ 6325], 10.00th=[ 7439], 20.00th=[ 9634], 00:18:08.222 | 30.00th=[10683], 40.00th=[11863], 50.00th=[12518], 60.00th=[13566], 00:18:08.222 | 70.00th=[14615], 80.00th=[16188], 90.00th=[17957], 95.00th=[20055], 00:18:08.222 | 99.00th=[25035], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:18:08.222 | 99.99th=[29230] 00:18:08.222 bw ( KiB/s): min=20624, max=21232, per=36.58%, avg=20928.00, stdev=429.92, samples=2 00:18:08.222 iops : min= 5156, max= 5308, avg=5232.00, stdev=107.48, samples=2 00:18:08.222 lat (msec) : 2=0.01%, 4=0.29%, 10=32.24%, 20=63.33%, 50=4.14% 00:18:08.222 cpu : usr=3.55%, sys=4.34%, ctx=688, majf=0, minf=1 00:18:08.222 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:08.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.222 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:08.222 issued rwts: total=5120,5359,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.222 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:08.222 job2: (groupid=0, jobs=1): err= 0: pid=2454058: Fri Apr 26 16:00:47 2024 00:18:08.222 read: IOPS=2516, BW=9.83MiB/s (10.3MB/s)(9.90MiB/1007msec) 00:18:08.222 slat (nsec): min=1067, max=29007k, avg=212664.85, stdev=1629206.92 00:18:08.222 clat (usec): min=5780, max=74212, avg=27484.72, stdev=11262.92 00:18:08.222 lat (usec): min=7527, max=74263, avg=27697.38, stdev=11398.56 00:18:08.222 clat percentiles (usec): 00:18:08.222 | 1.00th=[ 7767], 5.00th=[11994], 10.00th=[14746], 20.00th=[19268], 00:18:08.222 | 30.00th=[21103], 40.00th=[22676], 50.00th=[24511], 60.00th=[28181], 00:18:08.222 | 70.00th=[31851], 80.00th=[37487], 
90.00th=[44827], 95.00th=[51119], 00:18:08.222 | 99.00th=[54264], 99.50th=[54264], 99.90th=[67634], 99.95th=[67634], 00:18:08.222 | 99.99th=[73925] 00:18:08.222 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:18:08.222 slat (usec): min=2, max=19786, avg=172.22, stdev=1093.69 00:18:08.222 clat (usec): min=7926, max=54062, avg=22072.70, stdev=7910.02 00:18:08.222 lat (usec): min=7938, max=54141, avg=22244.92, stdev=7978.12 00:18:08.222 clat percentiles (usec): 00:18:08.222 | 1.00th=[ 9503], 5.00th=[11863], 10.00th=[12649], 20.00th=[15533], 00:18:08.222 | 30.00th=[17433], 40.00th=[19530], 50.00th=[21627], 60.00th=[22414], 00:18:08.222 | 70.00th=[25035], 80.00th=[27919], 90.00th=[32375], 95.00th=[37487], 00:18:08.222 | 99.00th=[48497], 99.50th=[48497], 99.90th=[50594], 99.95th=[53740], 00:18:08.222 | 99.99th=[54264] 00:18:08.222 bw ( KiB/s): min= 8944, max=11536, per=17.90%, avg=10240.00, stdev=1832.82, samples=2 00:18:08.222 iops : min= 2236, max= 2884, avg=2560.00, stdev=458.21, samples=2 00:18:08.222 lat (msec) : 10=2.49%, 20=31.82%, 50=62.92%, 100=2.77% 00:18:08.222 cpu : usr=1.39%, sys=3.18%, ctx=265, majf=0, minf=1 00:18:08.222 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:08.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.222 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:08.222 issued rwts: total=2534,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.222 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:08.222 job3: (groupid=0, jobs=1): err= 0: pid=2454059: Fri Apr 26 16:00:47 2024 00:18:08.222 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:18:08.222 slat (nsec): min=1074, max=25480k, avg=128765.58, stdev=1022905.96 00:18:08.222 clat (usec): min=2774, max=45730, avg=18453.00, stdev=6018.02 00:18:08.222 lat (usec): min=2780, max=45747, avg=18581.76, stdev=6076.42 00:18:08.222 clat percentiles (usec): 00:18:08.222 | 1.00th=[ 5145], 5.00th=[ 9634], 10.00th=[11731], 20.00th=[13829], 00:18:08.222 | 30.00th=[15139], 40.00th=[16909], 50.00th=[17695], 60.00th=[19268], 00:18:08.222 | 70.00th=[20841], 80.00th=[22676], 90.00th=[27132], 95.00th=[31065], 00:18:08.222 | 99.00th=[34866], 99.50th=[34866], 99.90th=[38011], 99.95th=[43254], 00:18:08.222 | 99.99th=[45876] 00:18:08.222 write: IOPS=4018, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1005msec); 0 zone resets 00:18:08.222 slat (nsec): min=1855, max=11150k, avg=110317.16, stdev=671642.94 00:18:08.222 clat (usec): min=738, max=46566, avg=15069.15, stdev=8073.99 00:18:08.222 lat (usec): min=794, max=46570, avg=15179.47, stdev=8121.79 00:18:08.222 clat percentiles (usec): 00:18:08.222 | 1.00th=[ 3294], 5.00th=[ 6915], 10.00th=[ 8029], 20.00th=[ 9765], 00:18:08.222 | 30.00th=[10683], 40.00th=[11731], 50.00th=[13042], 60.00th=[14615], 00:18:08.222 | 70.00th=[16319], 80.00th=[18482], 90.00th=[24249], 95.00th=[34866], 00:18:08.222 | 99.00th=[44303], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:18:08.222 | 99.99th=[46400] 00:18:08.222 bw ( KiB/s): min=13952, max=17336, per=27.34%, avg=15644.00, stdev=2392.85, samples=2 00:18:08.222 iops : min= 3488, max= 4334, avg=3911.00, stdev=598.21, samples=2 00:18:08.222 lat (usec) : 750=0.01% 00:18:08.222 lat (msec) : 2=0.14%, 4=1.01%, 10=14.82%, 20=59.10%, 50=24.91% 00:18:08.222 cpu : usr=2.19%, sys=3.29%, ctx=453, majf=0, minf=1 00:18:08.222 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:08.222 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.222 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:08.222 issued rwts: total=3584,4039,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.222 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:08.222 00:18:08.222 Run status group 0 (all jobs): 00:18:08.222 READ: bw=52.9MiB/s (55.5MB/s), 9970KiB/s-19.7MiB/s (10.2MB/s-20.7MB/s), io=53.7MiB (56.3MB), run=1005-1015msec 00:18:08.222 WRITE: bw=55.9MiB/s (58.6MB/s), 9.93MiB/s-20.6MiB/s (10.4MB/s-21.6MB/s), io=56.7MiB (59.5MB), run=1005-1015msec 00:18:08.222 00:18:08.222 Disk stats (read/write): 00:18:08.222 nvme0n1: ios=2026/2048, merge=0/0, ticks=22325/28943, in_queue=51268, util=87.27% 00:18:08.222 nvme0n2: ios=4333/4608, merge=0/0, ticks=47250/55687, in_queue=102937, util=91.73% 00:18:08.222 nvme0n3: ios=2099/2048, merge=0/0, ticks=27200/18003, in_queue=45203, util=96.96% 00:18:08.222 nvme0n4: ios=3072/3375, merge=0/0, ticks=51872/41680, in_queue=93552, util=89.41% 00:18:08.222 16:00:47 -- target/fio.sh@55 -- # sync 00:18:08.222 16:00:47 -- target/fio.sh@59 -- # fio_pid=2454290 00:18:08.222 16:00:47 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:08.222 16:00:47 -- target/fio.sh@61 -- # sleep 3 00:18:08.222 [global] 00:18:08.222 thread=1 00:18:08.222 invalidate=1 00:18:08.222 rw=read 00:18:08.222 time_based=1 00:18:08.222 runtime=10 00:18:08.222 ioengine=libaio 00:18:08.222 direct=1 00:18:08.222 bs=4096 00:18:08.222 iodepth=1 00:18:08.222 norandommap=1 00:18:08.222 numjobs=1 00:18:08.222 00:18:08.222 [job0] 00:18:08.222 filename=/dev/nvme0n1 00:18:08.222 [job1] 00:18:08.222 filename=/dev/nvme0n2 00:18:08.222 [job2] 00:18:08.222 filename=/dev/nvme0n3 00:18:08.222 [job3] 00:18:08.222 filename=/dev/nvme0n4 00:18:08.222 Could not set queue depth (nvme0n1) 00:18:08.222 Could not set queue depth (nvme0n2) 00:18:08.222 Could not set queue depth (nvme0n3) 00:18:08.222 Could not set queue depth (nvme0n4) 00:18:08.222 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:08.222 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:08.222 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:08.222 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:08.222 fio-3.35 00:18:08.222 Starting 4 threads 00:18:11.568 16:00:50 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:11.568 16:00:50 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:11.568 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=1753088, buflen=4096 00:18:11.568 fio: pid=2454454, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:11.568 16:00:50 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:11.568 16:00:50 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:11.568 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=290816, buflen=4096 00:18:11.568 fio: pid=2454448, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:11.568 fio: io_u error on file /dev/nvme0n1: 
Remote I/O error: read offset=1208320, buflen=4096 00:18:11.568 fio: pid=2454433, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:11.568 16:00:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:11.568 16:00:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:11.829 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=13266944, buflen=4096 00:18:11.829 fio: pid=2454441, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:11.829 00:18:11.829 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2454433: Fri Apr 26 16:00:51 2024 00:18:11.829 read: IOPS=95, BW=381KiB/s (390kB/s)(1180KiB/3096msec) 00:18:11.829 slat (usec): min=7, max=12129, avg=53.05, stdev=704.32 00:18:11.829 clat (usec): min=418, max=43902, avg=10349.55, stdev=17552.59 00:18:11.829 lat (usec): min=427, max=53376, avg=10402.75, stdev=17644.31 00:18:11.829 clat percentiles (usec): 00:18:11.829 | 1.00th=[ 469], 5.00th=[ 490], 10.00th=[ 494], 20.00th=[ 498], 00:18:11.829 | 30.00th=[ 498], 40.00th=[ 502], 50.00th=[ 553], 60.00th=[ 644], 00:18:11.829 | 70.00th=[ 799], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:18:11.829 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:18:11.829 | 99.99th=[43779] 00:18:11.829 bw ( KiB/s): min= 96, max= 160, per=2.48%, avg=120.00, stdev=29.93, samples=5 00:18:11.829 iops : min= 24, max= 40, avg=30.00, stdev= 7.48, samples=5 00:18:11.829 lat (usec) : 500=38.51%, 750=27.36%, 1000=8.78% 00:18:11.829 lat (msec) : 2=1.35%, 50=23.65% 00:18:11.829 cpu : usr=0.16%, sys=0.06%, ctx=301, majf=0, minf=1 00:18:11.829 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:11.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:11.829 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:11.829 issued rwts: total=296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:11.829 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:11.829 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2454441: Fri Apr 26 16:00:51 2024 00:18:11.829 read: IOPS=972, BW=3888KiB/s (3982kB/s)(12.7MiB/3332msec) 00:18:11.829 slat (usec): min=6, max=26663, avg=26.79, stdev=573.11 00:18:11.829 clat (usec): min=303, max=42899, avg=997.17, stdev=4193.47 00:18:11.829 lat (usec): min=310, max=42921, avg=1023.96, stdev=4275.46 00:18:11.829 clat percentiles (usec): 00:18:11.829 | 1.00th=[ 379], 5.00th=[ 441], 10.00th=[ 498], 20.00th=[ 523], 00:18:11.829 | 30.00th=[ 537], 40.00th=[ 553], 50.00th=[ 562], 60.00th=[ 570], 00:18:11.829 | 70.00th=[ 578], 80.00th=[ 586], 90.00th=[ 611], 95.00th=[ 676], 00:18:11.829 | 99.00th=[40633], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:11.829 | 99.99th=[42730] 00:18:11.829 bw ( KiB/s): min= 96, max= 6944, per=76.94%, avg=3725.17, stdev=3109.51, samples=6 00:18:11.829 iops : min= 24, max= 1736, avg=931.17, stdev=777.32, samples=6 00:18:11.829 lat (usec) : 500=11.70%, 750=84.60%, 1000=2.41% 00:18:11.829 lat (msec) : 2=0.15%, 20=0.03%, 50=1.08% 00:18:11.829 cpu : usr=0.36%, sys=1.62%, ctx=3245, majf=0, minf=1 00:18:11.829 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:11.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:11.829 complete : 0=0.1%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:11.829 issued rwts: total=3240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:11.829 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:11.829 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2454448: Fri Apr 26 16:00:51 2024 00:18:11.829 read: IOPS=24, BW=97.0KiB/s (99.3kB/s)(284KiB/2929msec) 00:18:11.829 slat (usec): min=13, max=3784, avg=71.72, stdev=443.70 00:18:11.829 clat (usec): min=974, max=42030, avg=40854.61, stdev=4825.19 00:18:11.829 lat (usec): min=1009, max=45017, avg=40927.00, stdev=4848.54 00:18:11.829 clat percentiles (usec): 00:18:11.829 | 1.00th=[ 971], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:11.829 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:18:11.829 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:11.829 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:11.829 | 99.99th=[42206] 00:18:11.829 bw ( KiB/s): min= 96, max= 104, per=2.00%, avg=97.60, stdev= 3.58, samples=5 00:18:11.829 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:18:11.829 lat (usec) : 1000=1.39% 00:18:11.829 lat (msec) : 50=97.22% 00:18:11.829 cpu : usr=0.00%, sys=0.10%, ctx=75, majf=0, minf=1 00:18:11.829 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:11.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:11.829 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:11.829 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:11.829 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:11.829 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2454454: Fri Apr 26 16:00:51 2024 00:18:11.829 read: IOPS=156, BW=625KiB/s (640kB/s)(1712KiB/2738msec) 00:18:11.829 slat (nsec): min=7362, max=39810, avg=10489.93, stdev=4941.07 00:18:11.829 clat (usec): min=349, max=43041, avg=6330.55, stdev=14450.27 00:18:11.829 lat (usec): min=357, max=43056, avg=6341.02, stdev=14454.13 00:18:11.829 clat percentiles (usec): 00:18:11.829 | 1.00th=[ 359], 5.00th=[ 375], 10.00th=[ 388], 20.00th=[ 408], 00:18:11.829 | 30.00th=[ 433], 40.00th=[ 494], 50.00th=[ 502], 60.00th=[ 537], 00:18:11.829 | 70.00th=[ 619], 80.00th=[ 668], 90.00th=[42206], 95.00th=[42206], 00:18:11.829 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:18:11.829 | 99.99th=[43254] 00:18:11.829 bw ( KiB/s): min= 88, max= 3008, per=13.94%, avg=675.20, stdev=1304.08, samples=5 00:18:11.829 iops : min= 22, max= 752, avg=168.80, stdev=326.02, samples=5 00:18:11.829 lat (usec) : 500=47.55%, 750=37.53%, 1000=0.70% 00:18:11.829 lat (msec) : 50=13.99% 00:18:11.829 cpu : usr=0.11%, sys=0.26%, ctx=430, majf=0, minf=2 00:18:11.829 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:11.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:11.829 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:11.829 issued rwts: total=429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:11.829 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:11.829 00:18:11.829 Run status group 0 (all jobs): 00:18:11.829 READ: bw=4842KiB/s (4958kB/s), 97.0KiB/s-3888KiB/s (99.3kB/s-3982kB/s), io=15.8MiB (16.5MB), run=2738-3332msec 00:18:11.829 00:18:11.829 Disk stats (read/write): 00:18:11.829 nvme0n1: ios=124/0, 
merge=0/0, ticks=3959/0, in_queue=3959, util=99.77% 00:18:11.829 nvme0n2: ios=2922/0, merge=0/0, ticks=2996/0, in_queue=2996, util=94.43% 00:18:11.829 nvme0n3: ios=119/0, merge=0/0, ticks=3964/0, in_queue=3964, util=100.00% 00:18:11.829 nvme0n4: ios=471/0, merge=0/0, ticks=3670/0, in_queue=3670, util=100.00% 00:18:11.829 16:00:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:11.829 16:00:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:12.087 16:00:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:12.087 16:00:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:12.344 16:00:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:12.344 16:00:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:12.602 16:00:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:12.602 16:00:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:12.859 16:00:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:12.860 16:00:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:13.117 16:00:52 -- target/fio.sh@69 -- # fio_status=0 00:18:13.117 16:00:52 -- target/fio.sh@70 -- # wait 2454290 00:18:13.117 16:00:52 -- target/fio.sh@70 -- # fio_status=4 00:18:13.117 16:00:52 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:14.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:14.048 16:00:53 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:14.048 16:00:53 -- common/autotest_common.sh@1205 -- # local i=0 00:18:14.048 16:00:53 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:14.048 16:00:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:14.048 16:00:53 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:14.048 16:00:53 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:14.048 16:00:53 -- common/autotest_common.sh@1217 -- # return 0 00:18:14.048 16:00:53 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:14.048 16:00:53 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:14.048 nvmf hotplug test: fio failed as expected 00:18:14.048 16:00:53 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:14.304 16:00:53 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:14.304 16:00:53 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:14.304 16:00:53 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:14.304 16:00:53 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:14.304 16:00:53 -- target/fio.sh@91 -- # nvmftestfini 00:18:14.305 16:00:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:14.305 16:00:53 -- nvmf/common.sh@117 -- # sync 00:18:14.305 16:00:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:14.305 16:00:53 -- nvmf/common.sh@120 -- # set +e 00:18:14.305 16:00:53 -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:18:14.305 16:00:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:14.305 rmmod nvme_tcp 00:18:14.305 rmmod nvme_fabrics 00:18:14.305 rmmod nvme_keyring 00:18:14.305 16:00:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:14.305 16:00:53 -- nvmf/common.sh@124 -- # set -e 00:18:14.305 16:00:53 -- nvmf/common.sh@125 -- # return 0 00:18:14.305 16:00:53 -- nvmf/common.sh@478 -- # '[' -n 2451364 ']' 00:18:14.305 16:00:53 -- nvmf/common.sh@479 -- # killprocess 2451364 00:18:14.305 16:00:53 -- common/autotest_common.sh@936 -- # '[' -z 2451364 ']' 00:18:14.305 16:00:53 -- common/autotest_common.sh@940 -- # kill -0 2451364 00:18:14.561 16:00:53 -- common/autotest_common.sh@941 -- # uname 00:18:14.561 16:00:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:14.561 16:00:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2451364 00:18:14.561 16:00:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:14.561 16:00:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:14.561 16:00:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2451364' 00:18:14.561 killing process with pid 2451364 00:18:14.561 16:00:54 -- common/autotest_common.sh@955 -- # kill 2451364 00:18:14.561 16:00:54 -- common/autotest_common.sh@960 -- # wait 2451364 00:18:15.936 16:00:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:15.936 16:00:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:15.936 16:00:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:15.936 16:00:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:15.936 16:00:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:15.936 16:00:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.936 16:00:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.936 16:00:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.839 16:00:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:17.839 00:18:17.839 real 0m29.437s 00:18:17.839 user 1m56.598s 00:18:17.839 sys 0m7.572s 00:18:17.839 16:00:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:17.839 16:00:57 -- common/autotest_common.sh@10 -- # set +x 00:18:17.839 ************************************ 00:18:17.839 END TEST nvmf_fio_target 00:18:17.839 ************************************ 00:18:17.839 16:00:57 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:17.839 16:00:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:17.839 16:00:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:17.839 16:00:57 -- common/autotest_common.sh@10 -- # set +x 00:18:18.098 ************************************ 00:18:18.098 START TEST nvmf_bdevio 00:18:18.098 ************************************ 00:18:18.098 16:00:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:18.098 * Looking for test storage... 
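For context on the 'nvmf hotplug test: fio failed as expected' message above: target/fio.sh starts a 10-second fio read job against the four namespaces and then deletes the backing bdevs over RPC while the job is still running, so the Remote I/O errors and the non-zero fio exit status are the intended outcome. A minimal sketch of that sequence, with the long workspace paths shortened and the error handling simplified relative to the real script (bdev names and the NQN are copied from the log):

# hotplug sequence distilled from the target/fio.sh trace above
rpc=scripts/rpc.py
scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &   # 10 s read job on /dev/nvme0n1..n4
fio_pid=$!
sleep 3
$rpc bdev_raid_delete concat0          # pull the backing bdevs out from under the running job
$rpc bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $rpc bdev_malloc_delete "$m"
done
fio_status=0
wait $fio_pid || fio_status=$?         # fio is expected to exit non-zero (err=121 above)
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
if [ "$fio_status" -ne 0 ]; then
    echo 'nvmf hotplug test: fio failed as expected'
fi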
00:18:18.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:18.098 16:00:57 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:18.098 16:00:57 -- nvmf/common.sh@7 -- # uname -s 00:18:18.098 16:00:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.098 16:00:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.098 16:00:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.098 16:00:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.098 16:00:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.098 16:00:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.098 16:00:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.098 16:00:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.098 16:00:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.098 16:00:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.098 16:00:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:18.098 16:00:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:18.098 16:00:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.098 16:00:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.098 16:00:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:18.098 16:00:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:18.098 16:00:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:18.098 16:00:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.098 16:00:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.098 16:00:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.098 16:00:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.098 16:00:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.098 16:00:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.098 16:00:57 -- paths/export.sh@5 -- # export PATH 00:18:18.098 16:00:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.098 16:00:57 -- nvmf/common.sh@47 -- # : 0 00:18:18.098 16:00:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:18.098 16:00:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:18.098 16:00:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:18.098 16:00:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.098 16:00:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.098 16:00:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:18.098 16:00:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:18.098 16:00:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:18.098 16:00:57 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:18.098 16:00:57 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:18.098 16:00:57 -- target/bdevio.sh@14 -- # nvmftestinit 00:18:18.098 16:00:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:18.098 16:00:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.098 16:00:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:18.098 16:00:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:18.098 16:00:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:18.098 16:00:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.098 16:00:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.098 16:00:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.098 16:00:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:18.098 16:00:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:18.098 16:00:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:18.098 16:00:57 -- common/autotest_common.sh@10 -- # set +x 00:18:23.371 16:01:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:23.371 16:01:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:23.371 16:01:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:23.371 16:01:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:23.371 16:01:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:23.371 16:01:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:23.371 16:01:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:23.371 16:01:02 -- nvmf/common.sh@295 -- # net_devs=() 00:18:23.371 16:01:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:23.371 16:01:02 -- nvmf/common.sh@296 
-- # e810=() 00:18:23.371 16:01:02 -- nvmf/common.sh@296 -- # local -ga e810 00:18:23.371 16:01:02 -- nvmf/common.sh@297 -- # x722=() 00:18:23.371 16:01:02 -- nvmf/common.sh@297 -- # local -ga x722 00:18:23.371 16:01:02 -- nvmf/common.sh@298 -- # mlx=() 00:18:23.371 16:01:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:23.371 16:01:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:23.371 16:01:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:23.371 16:01:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:23.371 16:01:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:23.371 16:01:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:23.371 16:01:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:23.371 16:01:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:23.371 16:01:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:23.371 16:01:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:23.371 16:01:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:23.371 16:01:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:23.371 16:01:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:23.371 16:01:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:23.371 16:01:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:23.371 16:01:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:23.371 16:01:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:23.371 16:01:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:23.371 16:01:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.371 16:01:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:23.371 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:23.371 16:01:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.371 16:01:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.371 16:01:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.371 16:01:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.371 16:01:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.371 16:01:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.371 16:01:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:23.371 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:23.371 16:01:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.371 16:01:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.371 16:01:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.371 16:01:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.371 16:01:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.371 16:01:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:23.371 16:01:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:23.371 16:01:02 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:23.371 16:01:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.371 16:01:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.371 16:01:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:23.371 16:01:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.371 16:01:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:23.371 Found 
net devices under 0000:86:00.0: cvl_0_0 00:18:23.371 16:01:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.371 16:01:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.371 16:01:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.371 16:01:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:23.371 16:01:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.371 16:01:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:23.371 Found net devices under 0000:86:00.1: cvl_0_1 00:18:23.371 16:01:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.371 16:01:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:23.371 16:01:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:23.371 16:01:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:23.371 16:01:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:23.371 16:01:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:23.371 16:01:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:23.371 16:01:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:23.371 16:01:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:23.371 16:01:02 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:23.371 16:01:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:23.371 16:01:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:23.371 16:01:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:23.371 16:01:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:23.371 16:01:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:23.372 16:01:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:23.372 16:01:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:23.372 16:01:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:23.372 16:01:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:23.372 16:01:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:23.372 16:01:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:23.372 16:01:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:23.372 16:01:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:23.629 16:01:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:23.629 16:01:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:23.629 16:01:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:23.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:23.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:18:23.629 00:18:23.629 --- 10.0.0.2 ping statistics --- 00:18:23.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.629 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:18:23.629 16:01:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:23.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:23.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.373 ms 00:18:23.629 00:18:23.629 --- 10.0.0.1 ping statistics --- 00:18:23.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.629 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:18:23.629 16:01:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:23.629 16:01:03 -- nvmf/common.sh@411 -- # return 0 00:18:23.629 16:01:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:23.629 16:01:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:23.629 16:01:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:23.629 16:01:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:23.629 16:01:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:23.629 16:01:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:23.629 16:01:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:23.629 16:01:03 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:23.629 16:01:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:23.629 16:01:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:23.629 16:01:03 -- common/autotest_common.sh@10 -- # set +x 00:18:23.629 16:01:03 -- nvmf/common.sh@470 -- # nvmfpid=2459127 00:18:23.629 16:01:03 -- nvmf/common.sh@471 -- # waitforlisten 2459127 00:18:23.629 16:01:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:23.629 16:01:03 -- common/autotest_common.sh@817 -- # '[' -z 2459127 ']' 00:18:23.629 16:01:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.629 16:01:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:23.629 16:01:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.629 16:01:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:23.629 16:01:03 -- common/autotest_common.sh@10 -- # set +x 00:18:23.629 [2024-04-26 16:01:03.291247] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:23.629 [2024-04-26 16:01:03.291336] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.886 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.886 [2024-04-26 16:01:03.401750] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:24.143 [2024-04-26 16:01:03.624943] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.143 [2024-04-26 16:01:03.624984] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.143 [2024-04-26 16:01:03.624994] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.143 [2024-04-26 16:01:03.625021] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.143 [2024-04-26 16:01:03.625029] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
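The two successful pings above close out the network bring-up for the bdevio run: nvmf_tcp_init moves one E810 port (cvl_0_0) into a private network namespace for the target and leaves its sibling (cvl_0_1) in the default namespace for the initiator, so the TCP traffic crosses the physical link (NET_TYPE=phy; the two ports are presumably cabled back to back or through a switch). A condensed sketch of that setup and of launching the target inside the namespace, with interface names, addresses and flags taken from the log:

# namespace/topology setup condensed from nvmf_tcp_init above
TGT_IF=cvl_0_0                       # port handed to the target
INI_IF=cvl_0_1                       # port left with the initiator
NS=cvl_0_0_ns_spdk

ip netns add $NS
ip link set $TGT_IF netns $NS
ip addr add 10.0.0.1/24 dev $INI_IF
ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF
ip link set $INI_IF up
ip netns exec $NS ip link set $TGT_IF up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                          # initiator side -> target port
ip netns exec $NS ping -c 1 10.0.0.1        # target side -> initiator port

# the target then runs inside the namespace so it owns 10.0.0.2:4420
ip netns exec $NS build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
nvmfpid=$!                                  # the waitforlisten helper seen above polls its RPC socket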
00:18:24.143 [2024-04-26 16:01:03.625219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:24.143 [2024-04-26 16:01:03.625324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:24.143 [2024-04-26 16:01:03.625411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:24.143 [2024-04-26 16:01:03.625432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:24.401 16:01:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:24.401 16:01:04 -- common/autotest_common.sh@850 -- # return 0 00:18:24.401 16:01:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:24.401 16:01:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:24.401 16:01:04 -- common/autotest_common.sh@10 -- # set +x 00:18:24.658 16:01:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.658 16:01:04 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:24.658 16:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.658 16:01:04 -- common/autotest_common.sh@10 -- # set +x 00:18:24.658 [2024-04-26 16:01:04.115326] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.658 16:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.658 16:01:04 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:24.658 16:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.658 16:01:04 -- common/autotest_common.sh@10 -- # set +x 00:18:24.658 Malloc0 00:18:24.658 16:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.658 16:01:04 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:24.658 16:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.658 16:01:04 -- common/autotest_common.sh@10 -- # set +x 00:18:24.658 16:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.658 16:01:04 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:24.658 16:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.658 16:01:04 -- common/autotest_common.sh@10 -- # set +x 00:18:24.658 16:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.658 16:01:04 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:24.658 16:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.658 16:01:04 -- common/autotest_common.sh@10 -- # set +x 00:18:24.658 [2024-04-26 16:01:04.236306] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.658 16:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.658 16:01:04 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:24.658 16:01:04 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:24.658 16:01:04 -- nvmf/common.sh@521 -- # config=() 00:18:24.658 16:01:04 -- nvmf/common.sh@521 -- # local subsystem config 00:18:24.658 16:01:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:24.658 16:01:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:24.658 { 00:18:24.658 "params": { 00:18:24.658 "name": "Nvme$subsystem", 00:18:24.658 "trtype": "$TEST_TRANSPORT", 00:18:24.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:24.658 "adrfam": "ipv4", 00:18:24.658 "trsvcid": 
"$NVMF_PORT", 00:18:24.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:24.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:24.658 "hdgst": ${hdgst:-false}, 00:18:24.658 "ddgst": ${ddgst:-false} 00:18:24.658 }, 00:18:24.658 "method": "bdev_nvme_attach_controller" 00:18:24.658 } 00:18:24.658 EOF 00:18:24.658 )") 00:18:24.658 16:01:04 -- nvmf/common.sh@543 -- # cat 00:18:24.658 16:01:04 -- nvmf/common.sh@545 -- # jq . 00:18:24.658 16:01:04 -- nvmf/common.sh@546 -- # IFS=, 00:18:24.658 16:01:04 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:24.658 "params": { 00:18:24.658 "name": "Nvme1", 00:18:24.658 "trtype": "tcp", 00:18:24.658 "traddr": "10.0.0.2", 00:18:24.658 "adrfam": "ipv4", 00:18:24.658 "trsvcid": "4420", 00:18:24.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:24.658 "hdgst": false, 00:18:24.658 "ddgst": false 00:18:24.658 }, 00:18:24.658 "method": "bdev_nvme_attach_controller" 00:18:24.658 }' 00:18:24.658 [2024-04-26 16:01:04.296319] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:24.658 [2024-04-26 16:01:04.296402] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2459376 ] 00:18:24.915 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.915 [2024-04-26 16:01:04.403113] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:25.172 [2024-04-26 16:01:04.643576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.172 [2024-04-26 16:01:04.643641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.172 [2024-04-26 16:01:04.643645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.737 I/O targets: 00:18:25.737 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:25.737 00:18:25.737 00:18:25.737 CUnit - A unit testing framework for C - Version 2.1-3 00:18:25.737 http://cunit.sourceforge.net/ 00:18:25.737 00:18:25.737 00:18:25.737 Suite: bdevio tests on: Nvme1n1 00:18:25.737 Test: blockdev write read block ...passed 00:18:25.737 Test: blockdev write zeroes read block ...passed 00:18:25.737 Test: blockdev write zeroes read no split ...passed 00:18:25.737 Test: blockdev write zeroes read split ...passed 00:18:25.994 Test: blockdev write zeroes read split partial ...passed 00:18:25.994 Test: blockdev reset ...[2024-04-26 16:01:05.467769] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:25.994 [2024-04-26 16:01:05.467885] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:18:25.994 [2024-04-26 16:01:05.491336] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:25.994 passed 00:18:25.994 Test: blockdev write read 8 blocks ...passed 00:18:25.994 Test: blockdev write read size > 128k ...passed 00:18:25.994 Test: blockdev write read invalid size ...passed 00:18:25.994 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:25.994 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:25.994 Test: blockdev write read max offset ...passed 00:18:25.994 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:25.994 Test: blockdev writev readv 8 blocks ...passed 00:18:26.251 Test: blockdev writev readv 30 x 1block ...passed 00:18:26.251 Test: blockdev writev readv block ...passed 00:18:26.251 Test: blockdev writev readv size > 128k ...passed 00:18:26.252 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:26.252 Test: blockdev comparev and writev ...[2024-04-26 16:01:05.766943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:26.252 [2024-04-26 16:01:05.766992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:26.252 [2024-04-26 16:01:05.767015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:26.252 [2024-04-26 16:01:05.767027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:26.252 [2024-04-26 16:01:05.767613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:26.252 [2024-04-26 16:01:05.767632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:26.252 [2024-04-26 16:01:05.767649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:26.252 [2024-04-26 16:01:05.767659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:26.252 [2024-04-26 16:01:05.768141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:26.252 [2024-04-26 16:01:05.768159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:26.252 [2024-04-26 16:01:05.768175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:26.252 [2024-04-26 16:01:05.768186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:26.252 [2024-04-26 16:01:05.768678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:26.252 [2024-04-26 16:01:05.768698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:26.252 [2024-04-26 16:01:05.768715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:26.252 [2024-04-26 16:01:05.768725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:26.252 passed 00:18:26.252 Test: blockdev nvme passthru rw ...passed 00:18:26.252 Test: blockdev nvme passthru vendor specific ...[2024-04-26 16:01:05.852858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:26.252 [2024-04-26 16:01:05.852890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:26.252 [2024-04-26 16:01:05.853213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:26.252 [2024-04-26 16:01:05.853229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:26.252 [2024-04-26 16:01:05.853560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:26.252 [2024-04-26 16:01:05.853576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:26.252 [2024-04-26 16:01:05.853888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:26.252 [2024-04-26 16:01:05.853902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:26.252 passed 00:18:26.252 Test: blockdev nvme admin passthru ...passed 00:18:26.252 Test: blockdev copy ...passed 00:18:26.252 00:18:26.252 Run Summary: Type Total Ran Passed Failed Inactive 00:18:26.252 suites 1 1 n/a 0 0 00:18:26.252 tests 23 23 23 0 0 00:18:26.252 asserts 152 152 152 0 n/a 00:18:26.252 00:18:26.252 Elapsed time = 1.553 seconds 00:18:27.622 16:01:06 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:27.622 16:01:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.622 16:01:06 -- common/autotest_common.sh@10 -- # set +x 00:18:27.622 16:01:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.622 16:01:06 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:27.622 16:01:06 -- target/bdevio.sh@30 -- # nvmftestfini 00:18:27.622 16:01:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:27.622 16:01:06 -- nvmf/common.sh@117 -- # sync 00:18:27.622 16:01:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:27.622 16:01:06 -- nvmf/common.sh@120 -- # set +e 00:18:27.622 16:01:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:27.622 16:01:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:27.622 rmmod nvme_tcp 00:18:27.622 rmmod nvme_fabrics 00:18:27.622 rmmod nvme_keyring 00:18:27.622 16:01:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:27.622 16:01:06 -- nvmf/common.sh@124 -- # set -e 00:18:27.622 16:01:06 -- nvmf/common.sh@125 -- # return 0 00:18:27.622 16:01:06 -- nvmf/common.sh@478 -- # '[' -n 2459127 ']' 00:18:27.622 16:01:06 -- nvmf/common.sh@479 -- # killprocess 2459127 00:18:27.622 16:01:06 -- common/autotest_common.sh@936 -- # '[' -z 2459127 ']' 00:18:27.622 16:01:06 -- common/autotest_common.sh@940 -- # kill -0 2459127 00:18:27.622 16:01:06 -- common/autotest_common.sh@941 -- # uname 00:18:27.622 16:01:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:27.622 16:01:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2459127 00:18:27.622 16:01:07 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:18:27.622 16:01:07 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:18:27.622 16:01:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2459127' 00:18:27.622 killing process with pid 2459127 00:18:27.622 16:01:07 -- common/autotest_common.sh@955 -- # kill 2459127 00:18:27.622 16:01:07 -- common/autotest_common.sh@960 -- # wait 2459127 00:18:28.996 16:01:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:28.996 16:01:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:28.996 16:01:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:28.996 16:01:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:28.996 16:01:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:28.996 16:01:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.996 16:01:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.996 16:01:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.525 16:01:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:31.525 00:18:31.525 real 0m13.031s 00:18:31.525 user 0m24.827s 00:18:31.525 sys 0m4.866s 00:18:31.525 16:01:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:31.525 16:01:10 -- common/autotest_common.sh@10 -- # set +x 00:18:31.525 ************************************ 00:18:31.525 END TEST nvmf_bdevio 00:18:31.525 ************************************ 00:18:31.525 16:01:10 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:18:31.525 16:01:10 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:31.525 16:01:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:31.525 16:01:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:31.525 16:01:10 -- common/autotest_common.sh@10 -- # set +x 00:18:31.525 ************************************ 00:18:31.525 START TEST nvmf_bdevio_no_huge 00:18:31.525 ************************************ 00:18:31.525 16:01:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:31.525 * Looking for test storage... 
00:18:31.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:31.525 16:01:10 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.525 16:01:10 -- nvmf/common.sh@7 -- # uname -s 00:18:31.525 16:01:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.525 16:01:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.525 16:01:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.525 16:01:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.525 16:01:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.525 16:01:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.525 16:01:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.525 16:01:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.525 16:01:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.525 16:01:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.525 16:01:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:31.525 16:01:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:31.525 16:01:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.525 16:01:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.525 16:01:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.526 16:01:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.526 16:01:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:31.526 16:01:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.526 16:01:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.526 16:01:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.526 16:01:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.526 16:01:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.526 16:01:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.526 16:01:10 -- paths/export.sh@5 -- # export PATH 00:18:31.526 16:01:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.526 16:01:10 -- nvmf/common.sh@47 -- # : 0 00:18:31.526 16:01:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:31.526 16:01:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:31.526 16:01:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.526 16:01:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.526 16:01:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.526 16:01:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:31.526 16:01:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:31.526 16:01:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:31.526 16:01:10 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:31.526 16:01:10 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:31.526 16:01:10 -- target/bdevio.sh@14 -- # nvmftestinit 00:18:31.526 16:01:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:31.526 16:01:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.526 16:01:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:31.526 16:01:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:31.526 16:01:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:31.526 16:01:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.526 16:01:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.526 16:01:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.526 16:01:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:31.526 16:01:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:31.526 16:01:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:31.526 16:01:10 -- common/autotest_common.sh@10 -- # set +x 00:18:36.791 16:01:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:36.791 16:01:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:36.791 16:01:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:36.791 16:01:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:36.791 16:01:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:36.791 16:01:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:36.791 16:01:15 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:36.791 16:01:15 -- nvmf/common.sh@295 -- # net_devs=() 00:18:36.791 16:01:15 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:36.791 16:01:15 -- nvmf/common.sh@296 
-- # e810=() 00:18:36.791 16:01:15 -- nvmf/common.sh@296 -- # local -ga e810 00:18:36.791 16:01:15 -- nvmf/common.sh@297 -- # x722=() 00:18:36.791 16:01:15 -- nvmf/common.sh@297 -- # local -ga x722 00:18:36.791 16:01:15 -- nvmf/common.sh@298 -- # mlx=() 00:18:36.791 16:01:15 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:36.791 16:01:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:36.791 16:01:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:36.791 16:01:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:36.791 16:01:15 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:36.791 16:01:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:36.791 16:01:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:36.791 16:01:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:36.791 16:01:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:36.791 16:01:15 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:36.791 16:01:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:36.791 16:01:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:36.791 16:01:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:36.791 16:01:15 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:36.791 16:01:15 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:36.791 16:01:15 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:36.791 16:01:15 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:36.791 16:01:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:36.791 16:01:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:36.791 16:01:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:36.791 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:36.791 16:01:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:36.791 16:01:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:36.791 16:01:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.791 16:01:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.791 16:01:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:36.791 16:01:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:36.791 16:01:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:36.791 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:36.791 16:01:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:36.791 16:01:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:36.791 16:01:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.791 16:01:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.791 16:01:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:36.791 16:01:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:36.791 16:01:15 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:36.791 16:01:15 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:36.791 16:01:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:36.791 16:01:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.791 16:01:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:36.791 16:01:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.791 16:01:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:36.791 Found 
net devices under 0000:86:00.0: cvl_0_0 00:18:36.791 16:01:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.791 16:01:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:36.791 16:01:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.791 16:01:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:36.791 16:01:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.791 16:01:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:36.791 Found net devices under 0000:86:00.1: cvl_0_1 00:18:36.791 16:01:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.791 16:01:15 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:36.791 16:01:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:36.791 16:01:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:36.792 16:01:15 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:36.792 16:01:15 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:36.792 16:01:15 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:36.792 16:01:15 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:36.792 16:01:15 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:36.792 16:01:15 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:36.792 16:01:15 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:36.792 16:01:15 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:36.792 16:01:15 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:36.792 16:01:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:36.792 16:01:15 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:36.792 16:01:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:36.792 16:01:15 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:36.792 16:01:15 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:36.792 16:01:15 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:36.792 16:01:15 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:36.792 16:01:15 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:36.792 16:01:15 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:36.792 16:01:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:36.792 16:01:15 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:36.792 16:01:15 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:36.792 16:01:15 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:36.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:36.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:18:36.792 00:18:36.792 --- 10.0.0.2 ping statistics --- 00:18:36.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.792 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:18:36.792 16:01:15 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:36.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:36.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.379 ms 00:18:36.792 00:18:36.792 --- 10.0.0.1 ping statistics --- 00:18:36.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.792 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:18:36.792 16:01:15 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:36.792 16:01:15 -- nvmf/common.sh@411 -- # return 0 00:18:36.792 16:01:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:36.792 16:01:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:36.792 16:01:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:36.792 16:01:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:36.792 16:01:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:36.792 16:01:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:36.792 16:01:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:36.792 16:01:15 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:36.792 16:01:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:36.792 16:01:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:36.792 16:01:15 -- common/autotest_common.sh@10 -- # set +x 00:18:36.792 16:01:15 -- nvmf/common.sh@470 -- # nvmfpid=2463370 00:18:36.792 16:01:15 -- nvmf/common.sh@471 -- # waitforlisten 2463370 00:18:36.792 16:01:15 -- common/autotest_common.sh@817 -- # '[' -z 2463370 ']' 00:18:36.792 16:01:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.792 16:01:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:36.792 16:01:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.792 16:01:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:36.792 16:01:15 -- common/autotest_common.sh@10 -- # set +x 00:18:36.792 16:01:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:36.792 [2024-04-26 16:01:16.050736] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:36.792 [2024-04-26 16:01:16.050834] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:36.792 [2024-04-26 16:01:16.178102] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:36.792 [2024-04-26 16:01:16.419773] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.792 [2024-04-26 16:01:16.419820] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.792 [2024-04-26 16:01:16.419830] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.792 [2024-04-26 16:01:16.419857] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.792 [2024-04-26 16:01:16.419865] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
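For readers skimming the nvmf_tcp_init trace above: the test loops the two ice ports of the same physical NIC back to each other through a network namespace, so the target listens on 10.0.0.2 inside cvl_0_0_ns_spdk while the initiator talks from 10.0.0.1 on cvl_0_1 in the root namespace. A minimal sketch of that topology, condensed from the commands already visible in the trace (interface names and addresses are the ones the trace reports; run as root):

# Target port goes into its own namespace; initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic back in
ping -c 1 10.0.0.2                                                  # sanity check, as the trace does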
00:18:36.792 [2024-04-26 16:01:16.420028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:36.792 [2024-04-26 16:01:16.420124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:36.792 [2024-04-26 16:01:16.420192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:36.792 [2024-04-26 16:01:16.420214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:37.357 16:01:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:37.357 16:01:16 -- common/autotest_common.sh@850 -- # return 0 00:18:37.357 16:01:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:37.357 16:01:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:37.357 16:01:16 -- common/autotest_common.sh@10 -- # set +x 00:18:37.357 16:01:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.357 16:01:16 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:37.357 16:01:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:37.357 16:01:16 -- common/autotest_common.sh@10 -- # set +x 00:18:37.357 [2024-04-26 16:01:16.877974] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.357 16:01:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:37.357 16:01:16 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:37.357 16:01:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:37.357 16:01:16 -- common/autotest_common.sh@10 -- # set +x 00:18:37.357 Malloc0 00:18:37.357 16:01:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:37.357 16:01:16 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:37.357 16:01:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:37.357 16:01:16 -- common/autotest_common.sh@10 -- # set +x 00:18:37.357 16:01:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:37.357 16:01:16 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:37.357 16:01:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:37.357 16:01:16 -- common/autotest_common.sh@10 -- # set +x 00:18:37.357 16:01:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:37.357 16:01:16 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:37.357 16:01:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:37.357 16:01:16 -- common/autotest_common.sh@10 -- # set +x 00:18:37.357 [2024-04-26 16:01:16.983325] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.357 16:01:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:37.357 16:01:16 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:37.357 16:01:16 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:37.357 16:01:16 -- nvmf/common.sh@521 -- # config=() 00:18:37.357 16:01:16 -- nvmf/common.sh@521 -- # local subsystem config 00:18:37.357 16:01:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:37.357 16:01:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:37.357 { 00:18:37.357 "params": { 00:18:37.357 "name": "Nvme$subsystem", 00:18:37.357 "trtype": "$TEST_TRANSPORT", 00:18:37.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:37.357 "adrfam": "ipv4", 00:18:37.357 
"trsvcid": "$NVMF_PORT", 00:18:37.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:37.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:37.357 "hdgst": ${hdgst:-false}, 00:18:37.357 "ddgst": ${ddgst:-false} 00:18:37.357 }, 00:18:37.357 "method": "bdev_nvme_attach_controller" 00:18:37.357 } 00:18:37.357 EOF 00:18:37.357 )") 00:18:37.357 16:01:16 -- nvmf/common.sh@543 -- # cat 00:18:37.357 16:01:16 -- nvmf/common.sh@545 -- # jq . 00:18:37.357 16:01:16 -- nvmf/common.sh@546 -- # IFS=, 00:18:37.357 16:01:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:37.357 "params": { 00:18:37.357 "name": "Nvme1", 00:18:37.357 "trtype": "tcp", 00:18:37.357 "traddr": "10.0.0.2", 00:18:37.357 "adrfam": "ipv4", 00:18:37.357 "trsvcid": "4420", 00:18:37.357 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.357 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:37.357 "hdgst": false, 00:18:37.357 "ddgst": false 00:18:37.357 }, 00:18:37.357 "method": "bdev_nvme_attach_controller" 00:18:37.357 }' 00:18:37.615 [2024-04-26 16:01:17.057473] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:37.615 [2024-04-26 16:01:17.057550] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2463619 ] 00:18:37.615 [2024-04-26 16:01:17.173404] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:37.873 [2024-04-26 16:01:17.415771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.873 [2024-04-26 16:01:17.415836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.873 [2024-04-26 16:01:17.415843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.437 I/O targets: 00:18:38.437 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:38.437 00:18:38.437 00:18:38.437 CUnit - A unit testing framework for C - Version 2.1-3 00:18:38.437 http://cunit.sourceforge.net/ 00:18:38.437 00:18:38.437 00:18:38.437 Suite: bdevio tests on: Nvme1n1 00:18:38.437 Test: blockdev write read block ...passed 00:18:38.437 Test: blockdev write zeroes read block ...passed 00:18:38.437 Test: blockdev write zeroes read no split ...passed 00:18:38.694 Test: blockdev write zeroes read split ...passed 00:18:38.694 Test: blockdev write zeroes read split partial ...passed 00:18:38.694 Test: blockdev reset ...[2024-04-26 16:01:18.280808] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:38.694 [2024-04-26 16:01:18.280928] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:18:38.951 [2024-04-26 16:01:18.388796] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:38.951 passed 00:18:38.951 Test: blockdev write read 8 blocks ...passed 00:18:38.951 Test: blockdev write read size > 128k ...passed 00:18:38.951 Test: blockdev write read invalid size ...passed 00:18:38.951 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:38.951 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:38.951 Test: blockdev write read max offset ...passed 00:18:38.951 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:38.951 Test: blockdev writev readv 8 blocks ...passed 00:18:38.951 Test: blockdev writev readv 30 x 1block ...passed 00:18:38.951 Test: blockdev writev readv block ...passed 00:18:38.951 Test: blockdev writev readv size > 128k ...passed 00:18:38.951 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:38.951 Test: blockdev comparev and writev ...[2024-04-26 16:01:18.577924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.951 [2024-04-26 16:01:18.577971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.951 [2024-04-26 16:01:18.577992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.951 [2024-04-26 16:01:18.578003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.951 [2024-04-26 16:01:18.578557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.951 [2024-04-26 16:01:18.578576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:38.951 [2024-04-26 16:01:18.578593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.951 [2024-04-26 16:01:18.578603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:38.951 [2024-04-26 16:01:18.579095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.951 [2024-04-26 16:01:18.579112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:38.951 [2024-04-26 16:01:18.579129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.951 [2024-04-26 16:01:18.579139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:38.951 [2024-04-26 16:01:18.579704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.951 [2024-04-26 16:01:18.579721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:38.951 [2024-04-26 16:01:18.579737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.951 [2024-04-26 16:01:18.579747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:38.951 passed 00:18:39.208 Test: blockdev nvme passthru rw ...passed 00:18:39.208 Test: blockdev nvme passthru vendor specific ...[2024-04-26 16:01:18.664973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:39.208 [2024-04-26 16:01:18.665005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:39.208 [2024-04-26 16:01:18.665340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:39.208 [2024-04-26 16:01:18.665356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:39.208 [2024-04-26 16:01:18.665687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:39.208 [2024-04-26 16:01:18.665702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:39.208 [2024-04-26 16:01:18.666016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:39.208 [2024-04-26 16:01:18.666030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:39.208 passed 00:18:39.208 Test: blockdev nvme admin passthru ...passed 00:18:39.208 Test: blockdev copy ...passed 00:18:39.208 00:18:39.208 Run Summary: Type Total Ran Passed Failed Inactive 00:18:39.208 suites 1 1 n/a 0 0 00:18:39.208 tests 23 23 23 0 0 00:18:39.208 asserts 152 152 152 0 n/a 00:18:39.208 00:18:39.208 Elapsed time = 1.475 seconds 00:18:39.772 16:01:19 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:39.772 16:01:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:39.773 16:01:19 -- common/autotest_common.sh@10 -- # set +x 00:18:39.773 16:01:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:39.773 16:01:19 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:39.773 16:01:19 -- target/bdevio.sh@30 -- # nvmftestfini 00:18:39.773 16:01:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:39.773 16:01:19 -- nvmf/common.sh@117 -- # sync 00:18:39.773 16:01:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:39.773 16:01:19 -- nvmf/common.sh@120 -- # set +e 00:18:39.773 16:01:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:39.773 16:01:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:39.773 rmmod nvme_tcp 00:18:39.773 rmmod nvme_fabrics 00:18:39.773 rmmod nvme_keyring 00:18:39.773 16:01:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:39.773 16:01:19 -- nvmf/common.sh@124 -- # set -e 00:18:39.773 16:01:19 -- nvmf/common.sh@125 -- # return 0 00:18:39.773 16:01:19 -- nvmf/common.sh@478 -- # '[' -n 2463370 ']' 00:18:39.773 16:01:19 -- nvmf/common.sh@479 -- # killprocess 2463370 00:18:39.773 16:01:19 -- common/autotest_common.sh@936 -- # '[' -z 2463370 ']' 00:18:39.773 16:01:19 -- common/autotest_common.sh@940 -- # kill -0 2463370 00:18:39.773 16:01:19 -- common/autotest_common.sh@941 -- # uname 00:18:39.773 16:01:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:39.773 16:01:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2463370 00:18:40.030 16:01:19 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:18:40.030 16:01:19 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:18:40.030 16:01:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2463370' 00:18:40.030 killing process with pid 2463370 00:18:40.030 16:01:19 -- common/autotest_common.sh@955 -- # kill 2463370 00:18:40.030 16:01:19 -- common/autotest_common.sh@960 -- # wait 2463370 00:18:40.600 16:01:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:40.601 16:01:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:40.601 16:01:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:40.601 16:01:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:40.601 16:01:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:40.601 16:01:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.601 16:01:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.601 16:01:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.134 16:01:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:43.134 00:18:43.134 real 0m11.550s 00:18:43.134 user 0m20.531s 00:18:43.134 sys 0m4.945s 00:18:43.134 16:01:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:43.134 16:01:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.134 ************************************ 00:18:43.134 END TEST nvmf_bdevio_no_huge 00:18:43.134 ************************************ 00:18:43.134 16:01:22 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:43.134 16:01:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:43.134 16:01:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:43.134 16:01:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.134 ************************************ 00:18:43.134 START TEST nvmf_tls 00:18:43.134 ************************************ 00:18:43.134 16:01:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:43.134 * Looking for test storage... 
00:18:43.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:43.134 16:01:22 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:43.134 16:01:22 -- nvmf/common.sh@7 -- # uname -s 00:18:43.134 16:01:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.134 16:01:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.134 16:01:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.134 16:01:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.134 16:01:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.134 16:01:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.134 16:01:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.134 16:01:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.134 16:01:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.134 16:01:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.134 16:01:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:43.134 16:01:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:43.134 16:01:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.134 16:01:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.134 16:01:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:43.134 16:01:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.134 16:01:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:43.134 16:01:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.134 16:01:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.134 16:01:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.134 16:01:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.134 16:01:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.134 16:01:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.134 16:01:22 -- paths/export.sh@5 -- # export PATH 00:18:43.134 16:01:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.134 16:01:22 -- nvmf/common.sh@47 -- # : 0 00:18:43.134 16:01:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:43.134 16:01:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:43.134 16:01:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.134 16:01:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.134 16:01:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.134 16:01:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:43.134 16:01:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:43.134 16:01:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:43.134 16:01:22 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:43.134 16:01:22 -- target/tls.sh@62 -- # nvmftestinit 00:18:43.134 16:01:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:43.135 16:01:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:43.135 16:01:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:43.135 16:01:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:43.135 16:01:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:43.135 16:01:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.135 16:01:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:43.135 16:01:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.135 16:01:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:43.135 16:01:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:43.135 16:01:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:43.135 16:01:22 -- common/autotest_common.sh@10 -- # set +x 00:18:48.402 16:01:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:48.402 16:01:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:48.402 16:01:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:48.402 16:01:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:48.402 16:01:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:48.402 16:01:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:48.402 16:01:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:48.402 16:01:27 -- nvmf/common.sh@295 -- # net_devs=() 00:18:48.402 16:01:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:48.402 16:01:27 -- nvmf/common.sh@296 -- # e810=() 00:18:48.402 
16:01:27 -- nvmf/common.sh@296 -- # local -ga e810 00:18:48.402 16:01:27 -- nvmf/common.sh@297 -- # x722=() 00:18:48.402 16:01:27 -- nvmf/common.sh@297 -- # local -ga x722 00:18:48.402 16:01:27 -- nvmf/common.sh@298 -- # mlx=() 00:18:48.402 16:01:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:48.402 16:01:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:48.402 16:01:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:48.402 16:01:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:48.402 16:01:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:48.402 16:01:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:48.402 16:01:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:48.402 16:01:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:48.402 16:01:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:48.402 16:01:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:48.402 16:01:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:48.402 16:01:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:48.402 16:01:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:48.402 16:01:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:48.402 16:01:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:48.402 16:01:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:48.402 16:01:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:48.402 16:01:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:48.402 16:01:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:48.402 16:01:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:48.402 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:48.402 16:01:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:48.402 16:01:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:48.402 16:01:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.402 16:01:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.402 16:01:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:48.402 16:01:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:48.402 16:01:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:48.402 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:48.402 16:01:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:48.402 16:01:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:48.402 16:01:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.402 16:01:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.402 16:01:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:48.402 16:01:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:48.402 16:01:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:48.402 16:01:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:48.402 16:01:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:48.402 16:01:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.402 16:01:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:48.402 16:01:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.402 16:01:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:48.402 Found net devices under 
0000:86:00.0: cvl_0_0 00:18:48.403 16:01:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.403 16:01:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:48.403 16:01:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.403 16:01:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:48.403 16:01:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.403 16:01:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:48.403 Found net devices under 0000:86:00.1: cvl_0_1 00:18:48.403 16:01:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.403 16:01:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:48.403 16:01:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:48.403 16:01:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:48.403 16:01:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:48.403 16:01:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:48.403 16:01:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.403 16:01:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:48.403 16:01:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:48.403 16:01:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:48.403 16:01:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:48.403 16:01:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:48.403 16:01:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:48.403 16:01:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:48.403 16:01:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.403 16:01:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:48.403 16:01:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:48.403 16:01:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:48.403 16:01:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:48.403 16:01:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:48.403 16:01:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:48.403 16:01:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:48.403 16:01:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:48.403 16:01:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:48.403 16:01:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:48.403 16:01:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:48.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:48.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:18:48.403 00:18:48.403 --- 10.0.0.2 ping statistics --- 00:18:48.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.403 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:18:48.403 16:01:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:48.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:48.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.438 ms 00:18:48.403 00:18:48.403 --- 10.0.0.1 ping statistics --- 00:18:48.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.403 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:18:48.403 16:01:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.403 16:01:27 -- nvmf/common.sh@411 -- # return 0 00:18:48.403 16:01:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:48.403 16:01:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.403 16:01:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:48.403 16:01:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:48.403 16:01:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.403 16:01:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:48.403 16:01:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:48.403 16:01:27 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:48.403 16:01:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:48.403 16:01:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:48.403 16:01:27 -- common/autotest_common.sh@10 -- # set +x 00:18:48.403 16:01:28 -- nvmf/common.sh@470 -- # nvmfpid=2467599 00:18:48.403 16:01:28 -- nvmf/common.sh@471 -- # waitforlisten 2467599 00:18:48.403 16:01:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:48.403 16:01:28 -- common/autotest_common.sh@817 -- # '[' -z 2467599 ']' 00:18:48.403 16:01:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.403 16:01:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:48.403 16:01:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.403 16:01:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:48.403 16:01:28 -- common/autotest_common.sh@10 -- # set +x 00:18:48.661 [2024-04-26 16:01:28.084931] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:48.661 [2024-04-26 16:01:28.085022] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.661 EAL: No free 2048 kB hugepages reported on node 1 00:18:48.661 [2024-04-26 16:01:28.195834] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.920 [2024-04-26 16:01:28.420139] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.920 [2024-04-26 16:01:28.420184] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.920 [2024-04-26 16:01:28.420198] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.920 [2024-04-26 16:01:28.420211] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.920 [2024-04-26 16:01:28.420224] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:48.920 [2024-04-26 16:01:28.420265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.179 16:01:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:49.179 16:01:28 -- common/autotest_common.sh@850 -- # return 0 00:18:49.179 16:01:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:49.179 16:01:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:49.179 16:01:28 -- common/autotest_common.sh@10 -- # set +x 00:18:49.437 16:01:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.437 16:01:28 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:49.437 16:01:28 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:49.437 true 00:18:49.437 16:01:29 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:49.437 16:01:29 -- target/tls.sh@73 -- # jq -r .tls_version 00:18:49.696 16:01:29 -- target/tls.sh@73 -- # version=0 00:18:49.696 16:01:29 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:49.696 16:01:29 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:49.955 16:01:29 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:49.955 16:01:29 -- target/tls.sh@81 -- # jq -r .tls_version 00:18:49.955 16:01:29 -- target/tls.sh@81 -- # version=13 00:18:49.955 16:01:29 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:49.955 16:01:29 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:50.214 16:01:29 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:50.214 16:01:29 -- target/tls.sh@89 -- # jq -r .tls_version 00:18:50.473 16:01:29 -- target/tls.sh@89 -- # version=7 00:18:50.473 16:01:29 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:50.473 16:01:29 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:50.473 16:01:29 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:50.473 16:01:30 -- target/tls.sh@96 -- # ktls=false 00:18:50.473 16:01:30 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:50.473 16:01:30 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:50.731 16:01:30 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:50.731 16:01:30 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:50.992 16:01:30 -- target/tls.sh@104 -- # ktls=true 00:18:50.992 16:01:30 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:50.992 16:01:30 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:50.992 16:01:30 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:50.992 16:01:30 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:51.284 16:01:30 -- target/tls.sh@112 -- # ktls=false 00:18:51.284 16:01:30 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:51.284 16:01:30 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
00:18:51.284 16:01:30 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:51.284 16:01:30 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:51.284 16:01:30 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:18:51.284 16:01:30 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:18:51.284 16:01:30 -- nvmf/common.sh@693 -- # digest=1 00:18:51.284 16:01:30 -- nvmf/common.sh@694 -- # python - 00:18:51.284 16:01:30 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:51.284 16:01:30 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:51.284 16:01:30 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:51.284 16:01:30 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:51.284 16:01:30 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:18:51.284 16:01:30 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:18:51.284 16:01:30 -- nvmf/common.sh@693 -- # digest=1 00:18:51.284 16:01:30 -- nvmf/common.sh@694 -- # python - 00:18:51.284 16:01:30 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:51.284 16:01:30 -- target/tls.sh@121 -- # mktemp 00:18:51.284 16:01:30 -- target/tls.sh@121 -- # key_path=/tmp/tmp.aqs3MaGWiS 00:18:51.284 16:01:30 -- target/tls.sh@122 -- # mktemp 00:18:51.284 16:01:30 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Hwunwy2kmR 00:18:51.284 16:01:30 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:51.284 16:01:30 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:51.284 16:01:30 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.aqs3MaGWiS 00:18:51.284 16:01:30 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Hwunwy2kmR 00:18:51.284 16:01:30 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:51.586 16:01:31 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:52.154 16:01:31 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.aqs3MaGWiS 00:18:52.154 16:01:31 -- target/tls.sh@49 -- # local key=/tmp/tmp.aqs3MaGWiS 00:18:52.154 16:01:31 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:52.154 [2024-04-26 16:01:31.744259] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.154 16:01:31 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:52.413 16:01:31 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:52.413 [2024-04-26 16:01:32.089183] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:52.413 [2024-04-26 16:01:32.089436] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:52.671 16:01:32 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:52.671 malloc0 00:18:52.671 16:01:32 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:52.930 16:01:32 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aqs3MaGWiS 00:18:53.189 [2024-04-26 16:01:32.642486] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:53.189 16:01:32 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.aqs3MaGWiS 00:18:53.189 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.173 Initializing NVMe Controllers 00:19:03.173 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:03.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:03.173 Initialization complete. Launching workers. 00:19:03.173 ======================================================== 00:19:03.174 Latency(us) 00:19:03.174 Device Information : IOPS MiB/s Average min max 00:19:03.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12609.15 49.25 5076.48 1376.09 6558.40 00:19:03.174 ======================================================== 00:19:03.174 Total : 12609.15 49.25 5076.48 1376.09 6558.40 00:19:03.174 00:19:03.432 16:01:42 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aqs3MaGWiS 00:19:03.432 16:01:42 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:03.432 16:01:42 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:03.432 16:01:42 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:03.432 16:01:42 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aqs3MaGWiS' 00:19:03.432 16:01:42 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:03.432 16:01:42 -- target/tls.sh@28 -- # bdevperf_pid=2469956 00:19:03.432 16:01:42 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:03.432 16:01:42 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:03.432 16:01:42 -- target/tls.sh@31 -- # waitforlisten 2469956 /var/tmp/bdevperf.sock 00:19:03.432 16:01:42 -- common/autotest_common.sh@817 -- # '[' -z 2469956 ']' 00:19:03.432 16:01:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:03.432 16:01:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:03.432 16:01:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:03.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:03.432 16:01:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:03.432 16:01:42 -- common/autotest_common.sh@10 -- # set +x 00:19:03.433 [2024-04-26 16:01:42.936324] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:19:03.433 [2024-04-26 16:01:42.936415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469956 ] 00:19:03.433 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.433 [2024-04-26 16:01:43.035296] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.703 [2024-04-26 16:01:43.263387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.274 16:01:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:04.274 16:01:43 -- common/autotest_common.sh@850 -- # return 0 00:19:04.274 16:01:43 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aqs3MaGWiS 00:19:04.274 [2024-04-26 16:01:43.864298] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:04.274 [2024-04-26 16:01:43.864407] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:04.533 TLSTESTn1 00:19:04.533 16:01:43 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:04.533 Running I/O for 10 seconds... 00:19:14.510 00:19:14.510 Latency(us) 00:19:14.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.510 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:14.510 Verification LBA range: start 0x0 length 0x2000 00:19:14.510 TLSTESTn1 : 10.07 1788.41 6.99 0.00 0.00 71324.80 8263.23 124917.31 00:19:14.510 =================================================================================================================== 00:19:14.510 Total : 1788.41 6.99 0.00 0.00 71324.80 8263.23 124917.31 00:19:14.510 0 00:19:14.510 16:01:54 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:14.510 16:01:54 -- target/tls.sh@45 -- # killprocess 2469956 00:19:14.510 16:01:54 -- common/autotest_common.sh@936 -- # '[' -z 2469956 ']' 00:19:14.510 16:01:54 -- common/autotest_common.sh@940 -- # kill -0 2469956 00:19:14.769 16:01:54 -- common/autotest_common.sh@941 -- # uname 00:19:14.769 16:01:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:14.769 16:01:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2469956 00:19:14.769 16:01:54 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:14.769 16:01:54 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:14.769 16:01:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2469956' 00:19:14.769 killing process with pid 2469956 00:19:14.769 16:01:54 -- common/autotest_common.sh@955 -- # kill 2469956 00:19:14.769 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.769 00:19:14.769 Latency(us) 00:19:14.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.769 =================================================================================================================== 00:19:14.769 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:14.769 [2024-04-26 16:01:54.238648] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:14.769 16:01:54 -- common/autotest_common.sh@960 -- # wait 2469956 00:19:15.705 16:01:55 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hwunwy2kmR 00:19:15.705 16:01:55 -- common/autotest_common.sh@638 -- # local es=0 00:19:15.705 16:01:55 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hwunwy2kmR 00:19:15.705 16:01:55 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:15.705 16:01:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:15.705 16:01:55 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:15.705 16:01:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:15.705 16:01:55 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hwunwy2kmR 00:19:15.705 16:01:55 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:15.705 16:01:55 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:15.705 16:01:55 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:15.705 16:01:55 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Hwunwy2kmR' 00:19:15.705 16:01:55 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:15.705 16:01:55 -- target/tls.sh@28 -- # bdevperf_pid=2472025 00:19:15.705 16:01:55 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:15.705 16:01:55 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:15.705 16:01:55 -- target/tls.sh@31 -- # waitforlisten 2472025 /var/tmp/bdevperf.sock 00:19:15.705 16:01:55 -- common/autotest_common.sh@817 -- # '[' -z 2472025 ']' 00:19:15.705 16:01:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.705 16:01:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:15.705 16:01:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.705 16:01:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:15.705 16:01:55 -- common/autotest_common.sh@10 -- # set +x 00:19:15.705 [2024-04-26 16:01:55.345867] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:19:15.705 [2024-04-26 16:01:55.345956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472025 ] 00:19:15.964 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.964 [2024-04-26 16:01:55.458174] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.223 [2024-04-26 16:01:55.681390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.482 16:01:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:16.482 16:01:56 -- common/autotest_common.sh@850 -- # return 0 00:19:16.482 16:01:56 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Hwunwy2kmR 00:19:16.740 [2024-04-26 16:01:56.270868] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:16.740 [2024-04-26 16:01:56.270984] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:16.740 [2024-04-26 16:01:56.282392] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:16.740 [2024-04-26 16:01:56.282517] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:19:16.740 [2024-04-26 16:01:56.283415] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:19:16.740 [2024-04-26 16:01:56.284417] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:16.740 [2024-04-26 16:01:56.284437] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:16.740 [2024-04-26 16:01:56.284453] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
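This attach fails as intended: the initiator is pointed at the second key (/tmp/tmp.Hwunwy2kmR), which does not match the PSK registered for nqn.2016-06.io.spdk:host1 on the target, and the whole attempt runs under the harness's NOT wrapper. The valid_exec_arg/es bookkeeping visible in this trace reduces to roughly the sketch below (simplified; the real common/autotest_common.sh helper also inspects the es > 128 signal range seen in the trace):

NOT() {
    # Run the wrapped command, capture its exit status, and succeed only if it failed.
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}

# e.g. NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hwunwy2kmR

The request/response dump that follows is rpc.py reporting that failed bdev_nvme_attach_controller call.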
00:19:16.740 request: 00:19:16.740 { 00:19:16.740 "name": "TLSTEST", 00:19:16.740 "trtype": "tcp", 00:19:16.740 "traddr": "10.0.0.2", 00:19:16.740 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:16.740 "adrfam": "ipv4", 00:19:16.740 "trsvcid": "4420", 00:19:16.740 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.740 "psk": "/tmp/tmp.Hwunwy2kmR", 00:19:16.740 "method": "bdev_nvme_attach_controller", 00:19:16.740 "req_id": 1 00:19:16.740 } 00:19:16.740 Got JSON-RPC error response 00:19:16.740 response: 00:19:16.740 { 00:19:16.740 "code": -32602, 00:19:16.740 "message": "Invalid parameters" 00:19:16.740 } 00:19:16.740 16:01:56 -- target/tls.sh@36 -- # killprocess 2472025 00:19:16.740 16:01:56 -- common/autotest_common.sh@936 -- # '[' -z 2472025 ']' 00:19:16.740 16:01:56 -- common/autotest_common.sh@940 -- # kill -0 2472025 00:19:16.740 16:01:56 -- common/autotest_common.sh@941 -- # uname 00:19:16.740 16:01:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:16.740 16:01:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2472025 00:19:16.740 16:01:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:16.740 16:01:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:16.740 16:01:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2472025' 00:19:16.740 killing process with pid 2472025 00:19:16.740 16:01:56 -- common/autotest_common.sh@955 -- # kill 2472025 00:19:16.740 Received shutdown signal, test time was about 10.000000 seconds 00:19:16.740 00:19:16.740 Latency(us) 00:19:16.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.740 =================================================================================================================== 00:19:16.740 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:16.740 [2024-04-26 16:01:56.346474] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:16.740 16:01:56 -- common/autotest_common.sh@960 -- # wait 2472025 00:19:18.118 16:01:57 -- target/tls.sh@37 -- # return 1 00:19:18.118 16:01:57 -- common/autotest_common.sh@641 -- # es=1 00:19:18.118 16:01:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:18.118 16:01:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:18.118 16:01:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:18.118 16:01:57 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aqs3MaGWiS 00:19:18.118 16:01:57 -- common/autotest_common.sh@638 -- # local es=0 00:19:18.118 16:01:57 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aqs3MaGWiS 00:19:18.118 16:01:57 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:18.118 16:01:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:18.118 16:01:57 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:18.118 16:01:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:18.118 16:01:57 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aqs3MaGWiS 00:19:18.118 16:01:57 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:18.118 16:01:57 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:18.118 16:01:57 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
00:19:18.118 16:01:57 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aqs3MaGWiS' 00:19:18.118 16:01:57 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:18.118 16:01:57 -- target/tls.sh@28 -- # bdevperf_pid=2472391 00:19:18.118 16:01:57 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:18.118 16:01:57 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:18.118 16:01:57 -- target/tls.sh@31 -- # waitforlisten 2472391 /var/tmp/bdevperf.sock 00:19:18.118 16:01:57 -- common/autotest_common.sh@817 -- # '[' -z 2472391 ']' 00:19:18.118 16:01:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.118 16:01:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:18.118 16:01:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:18.118 16:01:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:18.118 16:01:57 -- common/autotest_common.sh@10 -- # set +x 00:19:18.118 [2024-04-26 16:01:57.443509] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:18.118 [2024-04-26 16:01:57.443604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472391 ] 00:19:18.118 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.118 [2024-04-26 16:01:57.545833] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.118 [2024-04-26 16:01:57.768769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.685 16:01:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:18.686 16:01:58 -- common/autotest_common.sh@850 -- # return 0 00:19:18.686 16:01:58 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.aqs3MaGWiS 00:19:18.945 [2024-04-26 16:01:58.371307] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:18.945 [2024-04-26 16:01:58.371429] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:18.945 [2024-04-26 16:01:58.379619] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:18.945 [2024-04-26 16:01:58.379650] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:18.945 [2024-04-26 16:01:58.379703] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:18.945 [2024-04-26 16:01:58.380967] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:19:18.945 [2024-04-26 16:01:58.381942] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:19:18.945 [2024-04-26 16:01:58.382944] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:18.945 [2024-04-26 16:01:58.382965] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:18.945 [2024-04-26 16:01:58.382978] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:18.945 request: 00:19:18.945 { 00:19:18.945 "name": "TLSTEST", 00:19:18.945 "trtype": "tcp", 00:19:18.945 "traddr": "10.0.0.2", 00:19:18.945 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:18.945 "adrfam": "ipv4", 00:19:18.945 "trsvcid": "4420", 00:19:18.945 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:18.945 "psk": "/tmp/tmp.aqs3MaGWiS", 00:19:18.945 "method": "bdev_nvme_attach_controller", 00:19:18.945 "req_id": 1 00:19:18.945 } 00:19:18.945 Got JSON-RPC error response 00:19:18.945 response: 00:19:18.945 { 00:19:18.945 "code": -32602, 00:19:18.945 "message": "Invalid parameters" 00:19:18.945 } 00:19:18.945 16:01:58 -- target/tls.sh@36 -- # killprocess 2472391 00:19:18.945 16:01:58 -- common/autotest_common.sh@936 -- # '[' -z 2472391 ']' 00:19:18.945 16:01:58 -- common/autotest_common.sh@940 -- # kill -0 2472391 00:19:18.945 16:01:58 -- common/autotest_common.sh@941 -- # uname 00:19:18.945 16:01:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:18.945 16:01:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2472391 00:19:18.945 16:01:58 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:18.945 16:01:58 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:18.945 16:01:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2472391' 00:19:18.945 killing process with pid 2472391 00:19:18.945 16:01:58 -- common/autotest_common.sh@955 -- # kill 2472391 00:19:18.945 Received shutdown signal, test time was about 10.000000 seconds 00:19:18.945 00:19:18.945 Latency(us) 00:19:18.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.945 =================================================================================================================== 00:19:18.945 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:18.945 [2024-04-26 16:01:58.453558] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:18.945 16:01:58 -- common/autotest_common.sh@960 -- # wait 2472391 00:19:19.882 16:01:59 -- target/tls.sh@37 -- # return 1 00:19:19.882 16:01:59 -- common/autotest_common.sh@641 -- # es=1 00:19:19.882 16:01:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:19.882 16:01:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:19.882 16:01:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:19.882 16:01:59 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aqs3MaGWiS 00:19:19.882 16:01:59 -- common/autotest_common.sh@638 -- # local es=0 00:19:19.882 16:01:59 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aqs3MaGWiS 00:19:19.882 16:01:59 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:19.882 16:01:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:19.882 16:01:59 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:19.882 16:01:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:19.882 16:01:59 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aqs3MaGWiS 00:19:19.882 16:01:59 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:19.882 16:01:59 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:19.882 16:01:59 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:19.882 16:01:59 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aqs3MaGWiS' 00:19:19.882 16:01:59 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:19.882 16:01:59 -- target/tls.sh@28 -- # bdevperf_pid=2472730 00:19:19.882 16:01:59 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:19.882 16:01:59 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:19.882 16:01:59 -- target/tls.sh@31 -- # waitforlisten 2472730 /var/tmp/bdevperf.sock 00:19:19.882 16:01:59 -- common/autotest_common.sh@817 -- # '[' -z 2472730 ']' 00:19:19.882 16:01:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.882 16:01:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:19.882 16:01:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.882 16:01:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:19.882 16:01:59 -- common/autotest_common.sh@10 -- # set +x 00:19:19.882 [2024-04-26 16:01:59.547927] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:19:19.882 [2024-04-26 16:01:59.548022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472730 ] 00:19:20.140 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.140 [2024-04-26 16:01:59.649437] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.399 [2024-04-26 16:01:59.871229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.658 16:02:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:20.658 16:02:00 -- common/autotest_common.sh@850 -- # return 0 00:19:20.658 16:02:00 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aqs3MaGWiS 00:19:20.917 [2024-04-26 16:02:00.486196] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:20.917 [2024-04-26 16:02:00.486320] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:20.917 [2024-04-26 16:02:00.494582] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:20.917 [2024-04-26 16:02:00.494615] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:20.917 [2024-04-26 16:02:00.494662] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:20.917 [2024-04-26 16:02:00.494933] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:19:20.917 [2024-04-26 16:02:00.495905] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:19:20.917 [2024-04-26 16:02:00.496901] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:20.917 [2024-04-26 16:02:00.496922] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:20.917 [2024-04-26 16:02:00.496935] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:20.917 request: 00:19:20.917 { 00:19:20.917 "name": "TLSTEST", 00:19:20.917 "trtype": "tcp", 00:19:20.917 "traddr": "10.0.0.2", 00:19:20.917 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:20.917 "adrfam": "ipv4", 00:19:20.917 "trsvcid": "4420", 00:19:20.917 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:20.917 "psk": "/tmp/tmp.aqs3MaGWiS", 00:19:20.917 "method": "bdev_nvme_attach_controller", 00:19:20.917 "req_id": 1 00:19:20.917 } 00:19:20.917 Got JSON-RPC error response 00:19:20.917 response: 00:19:20.917 { 00:19:20.917 "code": -32602, 00:19:20.917 "message": "Invalid parameters" 00:19:20.917 } 00:19:20.917 16:02:00 -- target/tls.sh@36 -- # killprocess 2472730 00:19:20.917 16:02:00 -- common/autotest_common.sh@936 -- # '[' -z 2472730 ']' 00:19:20.917 16:02:00 -- common/autotest_common.sh@940 -- # kill -0 2472730 00:19:20.917 16:02:00 -- common/autotest_common.sh@941 -- # uname 00:19:20.917 16:02:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:20.917 16:02:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2472730 00:19:20.917 16:02:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:20.917 16:02:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:20.917 16:02:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2472730' 00:19:20.917 killing process with pid 2472730 00:19:20.917 16:02:00 -- common/autotest_common.sh@955 -- # kill 2472730 00:19:20.917 Received shutdown signal, test time was about 10.000000 seconds 00:19:20.917 00:19:20.917 Latency(us) 00:19:20.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.917 =================================================================================================================== 00:19:20.917 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:20.917 [2024-04-26 16:02:00.562014] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:20.917 16:02:00 -- common/autotest_common.sh@960 -- # wait 2472730 00:19:22.298 16:02:01 -- target/tls.sh@37 -- # return 1 00:19:22.298 16:02:01 -- common/autotest_common.sh@641 -- # es=1 00:19:22.298 16:02:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:22.298 16:02:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:22.298 16:02:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:22.298 16:02:01 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:22.298 16:02:01 -- common/autotest_common.sh@638 -- # local es=0 00:19:22.298 16:02:01 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:22.298 16:02:01 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:22.298 16:02:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:22.298 16:02:01 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:22.298 16:02:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:22.298 16:02:01 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:22.298 16:02:01 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:22.298 16:02:01 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:22.298 16:02:01 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:22.298 16:02:01 -- target/tls.sh@23 -- # psk= 
00:19:22.298 16:02:01 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:22.298 16:02:01 -- target/tls.sh@28 -- # bdevperf_pid=2473085 00:19:22.298 16:02:01 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:22.298 16:02:01 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:22.298 16:02:01 -- target/tls.sh@31 -- # waitforlisten 2473085 /var/tmp/bdevperf.sock 00:19:22.298 16:02:01 -- common/autotest_common.sh@817 -- # '[' -z 2473085 ']' 00:19:22.298 16:02:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:22.298 16:02:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:22.298 16:02:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:22.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:22.298 16:02:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:22.298 16:02:01 -- common/autotest_common.sh@10 -- # set +x 00:19:22.298 [2024-04-26 16:02:01.646366] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:22.298 [2024-04-26 16:02:01.646459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473085 ] 00:19:22.298 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.298 [2024-04-26 16:02:01.749182] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.298 [2024-04-26 16:02:01.978003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.866 16:02:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:22.866 16:02:02 -- common/autotest_common.sh@850 -- # return 0 00:19:22.866 16:02:02 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:23.126 [2024-04-26 16:02:02.578406] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:23.126 [2024-04-26 16:02:02.579834] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:19:23.126 [2024-04-26 16:02:02.580826] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:23.126 [2024-04-26 16:02:02.580848] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:23.126 [2024-04-26 16:02:02.580860] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:23.126 request: 00:19:23.126 { 00:19:23.126 "name": "TLSTEST", 00:19:23.126 "trtype": "tcp", 00:19:23.126 "traddr": "10.0.0.2", 00:19:23.126 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:23.126 "adrfam": "ipv4", 00:19:23.126 "trsvcid": "4420", 00:19:23.126 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.126 "method": "bdev_nvme_attach_controller", 00:19:23.126 "req_id": 1 00:19:23.126 } 00:19:23.126 Got JSON-RPC error response 00:19:23.126 response: 00:19:23.126 { 00:19:23.126 "code": -32602, 00:19:23.126 "message": "Invalid parameters" 00:19:23.126 } 00:19:23.126 16:02:02 -- target/tls.sh@36 -- # killprocess 2473085 00:19:23.126 16:02:02 -- common/autotest_common.sh@936 -- # '[' -z 2473085 ']' 00:19:23.126 16:02:02 -- common/autotest_common.sh@940 -- # kill -0 2473085 00:19:23.126 16:02:02 -- common/autotest_common.sh@941 -- # uname 00:19:23.126 16:02:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:23.126 16:02:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2473085 00:19:23.126 16:02:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:23.126 16:02:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:23.126 16:02:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2473085' 00:19:23.126 killing process with pid 2473085 00:19:23.126 16:02:02 -- common/autotest_common.sh@955 -- # kill 2473085 00:19:23.126 Received shutdown signal, test time was about 10.000000 seconds 00:19:23.126 00:19:23.126 Latency(us) 00:19:23.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.126 =================================================================================================================== 00:19:23.126 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:23.126 16:02:02 -- common/autotest_common.sh@960 -- # wait 2473085 00:19:24.066 16:02:03 -- target/tls.sh@37 -- # return 1 00:19:24.066 16:02:03 -- common/autotest_common.sh@641 -- # es=1 00:19:24.066 16:02:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:24.066 16:02:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:24.066 16:02:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:24.066 16:02:03 -- target/tls.sh@158 -- # killprocess 2467599 00:19:24.066 16:02:03 -- common/autotest_common.sh@936 -- # '[' -z 2467599 ']' 00:19:24.066 16:02:03 -- common/autotest_common.sh@940 -- # kill -0 2467599 00:19:24.066 16:02:03 -- common/autotest_common.sh@941 -- # uname 00:19:24.066 16:02:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:24.066 16:02:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2467599 00:19:24.066 16:02:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:24.066 16:02:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:24.066 16:02:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2467599' 00:19:24.066 killing process with pid 2467599 00:19:24.066 16:02:03 -- common/autotest_common.sh@955 -- # kill 2467599 00:19:24.066 [2024-04-26 16:02:03.695896] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:24.066 16:02:03 -- common/autotest_common.sh@960 -- # wait 2467599 00:19:25.445 16:02:05 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:25.445 16:02:05 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:19:25.445 16:02:05 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:25.445 16:02:05 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:19:25.445 16:02:05 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:25.445 16:02:05 -- nvmf/common.sh@693 -- # digest=2 00:19:25.445 16:02:05 -- nvmf/common.sh@694 -- # python - 00:19:25.705 16:02:05 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:25.705 16:02:05 -- target/tls.sh@160 -- # mktemp 00:19:25.705 16:02:05 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.7lTupJrdXS 00:19:25.705 16:02:05 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:25.705 16:02:05 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.7lTupJrdXS 00:19:25.705 16:02:05 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:25.705 16:02:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:25.705 16:02:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:25.705 16:02:05 -- common/autotest_common.sh@10 -- # set +x 00:19:25.705 16:02:05 -- nvmf/common.sh@470 -- # nvmfpid=2473675 00:19:25.705 16:02:05 -- nvmf/common.sh@471 -- # waitforlisten 2473675 00:19:25.705 16:02:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:25.705 16:02:05 -- common/autotest_common.sh@817 -- # '[' -z 2473675 ']' 00:19:25.705 16:02:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.705 16:02:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:25.705 16:02:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.705 16:02:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:25.705 16:02:05 -- common/autotest_common.sh@10 -- # set +x 00:19:25.705 [2024-04-26 16:02:05.221931] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:25.705 [2024-04-26 16:02:05.222016] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.705 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.705 [2024-04-26 16:02:05.327671] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.965 [2024-04-26 16:02:05.543879] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.965 [2024-04-26 16:02:05.543929] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.965 [2024-04-26 16:02:05.543943] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.965 [2024-04-26 16:02:05.543955] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.965 [2024-04-26 16:02:05.543967] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
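The format_interchange_psk/format_key expansion a few entries above (digest 2 for the 48-hex-digit secret here, digest 1 for the two shorter keys earlier) hands the configured secret to an inline python - snippet. A minimal reconstruction, assuming it follows the NVMe/TCP TLS PSK interchange layout (base64 of the configured key bytes plus a little-endian CRC-32, with the middle field carrying the hash identifier):

format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'PY'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, byteorder="little")   # 4-byte CRC appended before encoding
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()), end="")
PY
}

# format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
# should reproduce the NVMeTLSkey-1:02:...wWXNJw==: value captured above, if the layout assumption holds.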
00:19:25.965 [2024-04-26 16:02:05.544011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.534 16:02:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:26.534 16:02:05 -- common/autotest_common.sh@850 -- # return 0 00:19:26.534 16:02:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:26.534 16:02:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:26.534 16:02:05 -- common/autotest_common.sh@10 -- # set +x 00:19:26.534 16:02:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.534 16:02:06 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.7lTupJrdXS 00:19:26.534 16:02:06 -- target/tls.sh@49 -- # local key=/tmp/tmp.7lTupJrdXS 00:19:26.534 16:02:06 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:26.534 [2024-04-26 16:02:06.180817] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.534 16:02:06 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:26.794 16:02:06 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:27.052 [2024-04-26 16:02:06.513704] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:27.052 [2024-04-26 16:02:06.513954] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.052 16:02:06 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:27.311 malloc0 00:19:27.311 16:02:06 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:27.311 16:02:06 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7lTupJrdXS 00:19:27.570 [2024-04-26 16:02:07.053431] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:27.570 16:02:07 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7lTupJrdXS 00:19:27.570 16:02:07 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:27.570 16:02:07 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:27.570 16:02:07 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:27.570 16:02:07 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7lTupJrdXS' 00:19:27.570 16:02:07 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:27.570 16:02:07 -- target/tls.sh@28 -- # bdevperf_pid=2473978 00:19:27.570 16:02:07 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:27.570 16:02:07 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:27.570 16:02:07 -- target/tls.sh@31 -- # waitforlisten 2473978 /var/tmp/bdevperf.sock 00:19:27.570 16:02:07 -- common/autotest_common.sh@817 -- # '[' -z 2473978 ']' 00:19:27.570 16:02:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.570 16:02:07 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:19:27.570 16:02:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:27.570 16:02:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:27.570 16:02:07 -- common/autotest_common.sh@10 -- # set +x 00:19:27.570 [2024-04-26 16:02:07.144268] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:27.570 [2024-04-26 16:02:07.144363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473978 ] 00:19:27.570 EAL: No free 2048 kB hugepages reported on node 1 00:19:27.570 [2024-04-26 16:02:07.245147] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.829 [2024-04-26 16:02:07.472355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.397 16:02:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:28.397 16:02:07 -- common/autotest_common.sh@850 -- # return 0 00:19:28.398 16:02:07 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7lTupJrdXS 00:19:28.398 [2024-04-26 16:02:08.080675] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:28.398 [2024-04-26 16:02:08.080793] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:28.656 TLSTESTn1 00:19:28.656 16:02:08 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:28.656 Running I/O for 10 seconds... 
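Each bdevperf pass in this log is the same three-step flow; stripped of the harness bookkeeping it is roughly (paths and flags as used in this run):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# 1. Start bdevperf idle (-z) on its own RPC socket.
$SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# 2. Attach an NVMe-oF/TCP controller with the PSK; its namespace shows up as bdev TLSTESTn1.
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.7lTupJrdXS
# 3. Kick off the configured verify workload over the same socket.
$SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests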
00:19:38.687 00:19:38.687 Latency(us) 00:19:38.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.687 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:38.687 Verification LBA range: start 0x0 length 0x2000 00:19:38.687 TLSTESTn1 : 10.06 1795.70 7.01 0.00 0.00 71093.03 8434.20 94827.74 00:19:38.687 =================================================================================================================== 00:19:38.687 Total : 1795.70 7.01 0.00 0.00 71093.03 8434.20 94827.74 00:19:38.687 0 00:19:38.687 16:02:18 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:38.687 16:02:18 -- target/tls.sh@45 -- # killprocess 2473978 00:19:38.687 16:02:18 -- common/autotest_common.sh@936 -- # '[' -z 2473978 ']' 00:19:38.687 16:02:18 -- common/autotest_common.sh@940 -- # kill -0 2473978 00:19:38.687 16:02:18 -- common/autotest_common.sh@941 -- # uname 00:19:38.687 16:02:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:38.947 16:02:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2473978 00:19:38.947 16:02:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:38.947 16:02:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:38.947 16:02:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2473978' 00:19:38.947 killing process with pid 2473978 00:19:38.947 16:02:18 -- common/autotest_common.sh@955 -- # kill 2473978 00:19:38.947 Received shutdown signal, test time was about 10.000000 seconds 00:19:38.947 00:19:38.947 Latency(us) 00:19:38.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.947 =================================================================================================================== 00:19:38.947 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:38.947 [2024-04-26 16:02:18.407919] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:38.947 16:02:18 -- common/autotest_common.sh@960 -- # wait 2473978 00:19:39.883 16:02:19 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.7lTupJrdXS 00:19:39.883 16:02:19 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7lTupJrdXS 00:19:39.883 16:02:19 -- common/autotest_common.sh@638 -- # local es=0 00:19:39.884 16:02:19 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7lTupJrdXS 00:19:39.884 16:02:19 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:39.884 16:02:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:39.884 16:02:19 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:39.884 16:02:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:39.884 16:02:19 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7lTupJrdXS 00:19:39.884 16:02:19 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:39.884 16:02:19 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:39.884 16:02:19 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:39.884 16:02:19 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7lTupJrdXS' 00:19:39.884 16:02:19 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:39.884 16:02:19 -- target/tls.sh@28 -- # 
bdevperf_pid=2475997 00:19:39.884 16:02:19 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:39.884 16:02:19 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:39.884 16:02:19 -- target/tls.sh@31 -- # waitforlisten 2475997 /var/tmp/bdevperf.sock 00:19:39.884 16:02:19 -- common/autotest_common.sh@817 -- # '[' -z 2475997 ']' 00:19:39.884 16:02:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.884 16:02:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:39.884 16:02:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:39.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:39.884 16:02:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:39.884 16:02:19 -- common/autotest_common.sh@10 -- # set +x 00:19:39.884 [2024-04-26 16:02:19.522270] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:39.884 [2024-04-26 16:02:19.522365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2475997 ] 00:19:40.143 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.143 [2024-04-26 16:02:19.623784] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.402 [2024-04-26 16:02:19.847432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.661 16:02:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:40.661 16:02:20 -- common/autotest_common.sh@850 -- # return 0 00:19:40.661 16:02:20 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7lTupJrdXS 00:19:40.920 [2024-04-26 16:02:20.454411] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:40.920 [2024-04-26 16:02:20.454475] bdev_nvme.c:6071:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:40.920 [2024-04-26 16:02:20.454487] bdev_nvme.c:6180:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.7lTupJrdXS 00:19:40.920 request: 00:19:40.920 { 00:19:40.920 "name": "TLSTEST", 00:19:40.920 "trtype": "tcp", 00:19:40.920 "traddr": "10.0.0.2", 00:19:40.920 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:40.920 "adrfam": "ipv4", 00:19:40.920 "trsvcid": "4420", 00:19:40.920 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.920 "psk": "/tmp/tmp.7lTupJrdXS", 00:19:40.920 "method": "bdev_nvme_attach_controller", 00:19:40.920 "req_id": 1 00:19:40.920 } 00:19:40.920 Got JSON-RPC error response 00:19:40.920 response: 00:19:40.920 { 00:19:40.921 "code": -1, 00:19:40.921 "message": "Operation not permitted" 00:19:40.921 } 00:19:40.921 16:02:20 -- target/tls.sh@36 -- # killprocess 2475997 00:19:40.921 16:02:20 -- common/autotest_common.sh@936 -- # '[' -z 2475997 ']' 00:19:40.921 16:02:20 -- common/autotest_common.sh@940 -- # kill -0 2475997 00:19:40.921 16:02:20 -- common/autotest_common.sh@941 -- # uname 00:19:40.921 16:02:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:40.921 
16:02:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2475997 00:19:40.921 16:02:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:40.921 16:02:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:40.921 16:02:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2475997' 00:19:40.921 killing process with pid 2475997 00:19:40.921 16:02:20 -- common/autotest_common.sh@955 -- # kill 2475997 00:19:40.921 Received shutdown signal, test time was about 10.000000 seconds 00:19:40.921 00:19:40.921 Latency(us) 00:19:40.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.921 =================================================================================================================== 00:19:40.921 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:40.921 16:02:20 -- common/autotest_common.sh@960 -- # wait 2475997 00:19:41.929 16:02:21 -- target/tls.sh@37 -- # return 1 00:19:41.929 16:02:21 -- common/autotest_common.sh@641 -- # es=1 00:19:41.929 16:02:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:41.929 16:02:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:41.929 16:02:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:41.929 16:02:21 -- target/tls.sh@174 -- # killprocess 2473675 00:19:41.929 16:02:21 -- common/autotest_common.sh@936 -- # '[' -z 2473675 ']' 00:19:41.929 16:02:21 -- common/autotest_common.sh@940 -- # kill -0 2473675 00:19:41.929 16:02:21 -- common/autotest_common.sh@941 -- # uname 00:19:41.929 16:02:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:41.929 16:02:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2473675 00:19:41.929 16:02:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:41.929 16:02:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:41.929 16:02:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2473675' 00:19:41.929 killing process with pid 2473675 00:19:41.929 16:02:21 -- common/autotest_common.sh@955 -- # kill 2473675 00:19:41.929 [2024-04-26 16:02:21.591995] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:41.929 16:02:21 -- common/autotest_common.sh@960 -- # wait 2473675 00:19:43.305 16:02:22 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:43.305 16:02:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:43.305 16:02:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:43.305 16:02:22 -- common/autotest_common.sh@10 -- # set +x 00:19:43.305 16:02:22 -- nvmf/common.sh@470 -- # nvmfpid=2476697 00:19:43.305 16:02:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:43.305 16:02:22 -- nvmf/common.sh@471 -- # waitforlisten 2476697 00:19:43.305 16:02:22 -- common/autotest_common.sh@817 -- # '[' -z 2476697 ']' 00:19:43.305 16:02:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.305 16:02:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:43.305 16:02:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
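The failing attach above is the whole point of this negative test: target/tls.sh loosens the PSK file to mode 0666 and expects bdev_nvme_attach_controller to refuse it ("Incorrect permissions for PSK file", JSON-RPC error -1 "Operation not permitted"). A minimal bash sketch of that check, assuming a bdevperf instance is already listening on /var/tmp/bdevperf.sock and reusing the key path and NQNs shown in the log (the rpc.py path is abbreviated here):

    PSK=/tmp/tmp.7lTupJrdXS
    RPC_SOCK=/var/tmp/bdevperf.sock

    chmod 0666 "$PSK"    # world-readable PSK file must be refused

    if scripts/rpc.py -s "$RPC_SOCK" bdev_nvme_attach_controller -b TLSTEST \
            -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
            --psk "$PSK"; then
        echo "unexpected: attach succeeded with a 0666 PSK file" >&2
        exit 1
    fi
    echo "attach rejected as expected"

The NOT/valid_exec_arg wrappers traced in the log implement the same expectation inside the autotest framework: the attach must fail, and that failure is converted into a passing test step.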
00:19:43.305 16:02:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:43.305 16:02:22 -- common/autotest_common.sh@10 -- # set +x 00:19:43.583 [2024-04-26 16:02:23.036919] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:43.583 [2024-04-26 16:02:23.037023] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.583 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.583 [2024-04-26 16:02:23.145942] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.842 [2024-04-26 16:02:23.362295] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.842 [2024-04-26 16:02:23.362341] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.842 [2024-04-26 16:02:23.362353] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:43.842 [2024-04-26 16:02:23.362363] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:43.842 [2024-04-26 16:02:23.362373] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:43.842 [2024-04-26 16:02:23.362400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.410 16:02:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:44.410 16:02:23 -- common/autotest_common.sh@850 -- # return 0 00:19:44.410 16:02:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:44.410 16:02:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:44.410 16:02:23 -- common/autotest_common.sh@10 -- # set +x 00:19:44.410 16:02:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.410 16:02:23 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.7lTupJrdXS 00:19:44.410 16:02:23 -- common/autotest_common.sh@638 -- # local es=0 00:19:44.410 16:02:23 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.7lTupJrdXS 00:19:44.410 16:02:23 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:19:44.410 16:02:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:44.410 16:02:23 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:19:44.410 16:02:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:44.410 16:02:23 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.7lTupJrdXS 00:19:44.410 16:02:23 -- target/tls.sh@49 -- # local key=/tmp/tmp.7lTupJrdXS 00:19:44.410 16:02:23 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:44.410 [2024-04-26 16:02:23.981668] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.411 16:02:23 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:44.670 16:02:24 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:44.670 [2024-04-26 16:02:24.334617] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:44.670 [2024-04-26 16:02:24.334847] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.670 16:02:24 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:44.928 malloc0 00:19:44.928 16:02:24 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:45.187 16:02:24 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7lTupJrdXS 00:19:45.446 [2024-04-26 16:02:24.886440] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:45.446 [2024-04-26 16:02:24.886480] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:45.446 [2024-04-26 16:02:24.886506] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:19:45.446 request: 00:19:45.446 { 00:19:45.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.446 "host": "nqn.2016-06.io.spdk:host1", 00:19:45.446 "psk": "/tmp/tmp.7lTupJrdXS", 00:19:45.446 "method": "nvmf_subsystem_add_host", 00:19:45.446 "req_id": 1 00:19:45.446 } 00:19:45.446 Got JSON-RPC error response 00:19:45.446 response: 00:19:45.446 { 00:19:45.446 "code": -32603, 00:19:45.446 "message": "Internal error" 00:19:45.446 } 00:19:45.446 16:02:24 -- common/autotest_common.sh@641 -- # es=1 00:19:45.446 16:02:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:45.446 16:02:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:45.446 16:02:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:45.446 16:02:24 -- target/tls.sh@180 -- # killprocess 2476697 00:19:45.446 16:02:24 -- common/autotest_common.sh@936 -- # '[' -z 2476697 ']' 00:19:45.446 16:02:24 -- common/autotest_common.sh@940 -- # kill -0 2476697 00:19:45.446 16:02:24 -- common/autotest_common.sh@941 -- # uname 00:19:45.446 16:02:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:45.446 16:02:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2476697 00:19:45.446 16:02:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:45.446 16:02:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:45.446 16:02:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2476697' 00:19:45.446 killing process with pid 2476697 00:19:45.446 16:02:24 -- common/autotest_common.sh@955 -- # kill 2476697 00:19:45.446 16:02:24 -- common/autotest_common.sh@960 -- # wait 2476697 00:19:46.823 16:02:26 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.7lTupJrdXS 00:19:46.823 16:02:26 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:46.823 16:02:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:46.823 16:02:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:46.823 16:02:26 -- common/autotest_common.sh@10 -- # set +x 00:19:46.823 16:02:26 -- nvmf/common.sh@470 -- # nvmfpid=2477194 00:19:46.823 16:02:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:46.823 16:02:26 -- nvmf/common.sh@471 -- # waitforlisten 2477194 00:19:46.823 16:02:26 -- common/autotest_common.sh@817 -- # '[' -z 2477194 ']' 00:19:46.823 16:02:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.823 16:02:26 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:19:46.823 16:02:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.823 16:02:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:46.823 16:02:26 -- common/autotest_common.sh@10 -- # set +x 00:19:46.823 [2024-04-26 16:02:26.372017] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:46.824 [2024-04-26 16:02:26.372112] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.824 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.824 [2024-04-26 16:02:26.479280] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.082 [2024-04-26 16:02:26.704637] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.082 [2024-04-26 16:02:26.704679] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:47.082 [2024-04-26 16:02:26.704690] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:47.082 [2024-04-26 16:02:26.704701] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:47.082 [2024-04-26 16:02:26.704711] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:47.082 [2024-04-26 16:02:26.704746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.650 16:02:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:47.650 16:02:27 -- common/autotest_common.sh@850 -- # return 0 00:19:47.650 16:02:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:47.650 16:02:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:47.650 16:02:27 -- common/autotest_common.sh@10 -- # set +x 00:19:47.650 16:02:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.650 16:02:27 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.7lTupJrdXS 00:19:47.650 16:02:27 -- target/tls.sh@49 -- # local key=/tmp/tmp.7lTupJrdXS 00:19:47.650 16:02:27 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:47.650 [2024-04-26 16:02:27.322783] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.909 16:02:27 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:47.909 16:02:27 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:48.167 [2024-04-26 16:02:27.647649] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:48.168 [2024-04-26 16:02:27.647893] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.168 16:02:27 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:48.426 malloc0 00:19:48.426 16:02:27 -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:48.426 16:02:28 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7lTupJrdXS 00:19:48.685 [2024-04-26 16:02:28.190145] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:48.685 16:02:28 -- target/tls.sh@188 -- # bdevperf_pid=2477467 00:19:48.685 16:02:28 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:48.685 16:02:28 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:48.685 16:02:28 -- target/tls.sh@191 -- # waitforlisten 2477467 /var/tmp/bdevperf.sock 00:19:48.685 16:02:28 -- common/autotest_common.sh@817 -- # '[' -z 2477467 ']' 00:19:48.685 16:02:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.685 16:02:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:48.685 16:02:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:48.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:48.685 16:02:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:48.685 16:02:28 -- common/autotest_common.sh@10 -- # set +x 00:19:48.685 [2024-04-26 16:02:28.276466] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:48.685 [2024-04-26 16:02:28.276582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477467 ] 00:19:48.685 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.944 [2024-04-26 16:02:28.376881] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.944 [2024-04-26 16:02:28.603058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.511 16:02:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:49.511 16:02:29 -- common/autotest_common.sh@850 -- # return 0 00:19:49.511 16:02:29 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7lTupJrdXS 00:19:49.769 [2024-04-26 16:02:29.215304] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:49.769 [2024-04-26 16:02:29.215419] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:49.769 TLSTESTn1 00:19:49.769 16:02:29 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:50.029 16:02:29 -- target/tls.sh@196 -- # tgtconf='{ 00:19:50.029 "subsystems": [ 00:19:50.029 { 00:19:50.029 "subsystem": "keyring", 00:19:50.029 "config": [] 00:19:50.029 }, 00:19:50.029 { 00:19:50.029 "subsystem": "iobuf", 00:19:50.029 "config": [ 00:19:50.029 { 00:19:50.029 "method": "iobuf_set_options", 00:19:50.029 "params": { 00:19:50.029 
"small_pool_count": 8192, 00:19:50.029 "large_pool_count": 1024, 00:19:50.029 "small_bufsize": 8192, 00:19:50.029 "large_bufsize": 135168 00:19:50.029 } 00:19:50.029 } 00:19:50.029 ] 00:19:50.029 }, 00:19:50.029 { 00:19:50.029 "subsystem": "sock", 00:19:50.029 "config": [ 00:19:50.029 { 00:19:50.029 "method": "sock_impl_set_options", 00:19:50.029 "params": { 00:19:50.029 "impl_name": "posix", 00:19:50.029 "recv_buf_size": 2097152, 00:19:50.029 "send_buf_size": 2097152, 00:19:50.029 "enable_recv_pipe": true, 00:19:50.029 "enable_quickack": false, 00:19:50.029 "enable_placement_id": 0, 00:19:50.029 "enable_zerocopy_send_server": true, 00:19:50.029 "enable_zerocopy_send_client": false, 00:19:50.029 "zerocopy_threshold": 0, 00:19:50.029 "tls_version": 0, 00:19:50.029 "enable_ktls": false 00:19:50.029 } 00:19:50.029 }, 00:19:50.029 { 00:19:50.029 "method": "sock_impl_set_options", 00:19:50.029 "params": { 00:19:50.029 "impl_name": "ssl", 00:19:50.029 "recv_buf_size": 4096, 00:19:50.029 "send_buf_size": 4096, 00:19:50.029 "enable_recv_pipe": true, 00:19:50.029 "enable_quickack": false, 00:19:50.029 "enable_placement_id": 0, 00:19:50.029 "enable_zerocopy_send_server": true, 00:19:50.029 "enable_zerocopy_send_client": false, 00:19:50.029 "zerocopy_threshold": 0, 00:19:50.029 "tls_version": 0, 00:19:50.029 "enable_ktls": false 00:19:50.029 } 00:19:50.029 } 00:19:50.029 ] 00:19:50.029 }, 00:19:50.029 { 00:19:50.029 "subsystem": "vmd", 00:19:50.029 "config": [] 00:19:50.029 }, 00:19:50.029 { 00:19:50.029 "subsystem": "accel", 00:19:50.029 "config": [ 00:19:50.029 { 00:19:50.029 "method": "accel_set_options", 00:19:50.029 "params": { 00:19:50.029 "small_cache_size": 128, 00:19:50.029 "large_cache_size": 16, 00:19:50.029 "task_count": 2048, 00:19:50.029 "sequence_count": 2048, 00:19:50.029 "buf_count": 2048 00:19:50.029 } 00:19:50.029 } 00:19:50.029 ] 00:19:50.029 }, 00:19:50.029 { 00:19:50.029 "subsystem": "bdev", 00:19:50.029 "config": [ 00:19:50.029 { 00:19:50.029 "method": "bdev_set_options", 00:19:50.029 "params": { 00:19:50.029 "bdev_io_pool_size": 65535, 00:19:50.029 "bdev_io_cache_size": 256, 00:19:50.029 "bdev_auto_examine": true, 00:19:50.029 "iobuf_small_cache_size": 128, 00:19:50.029 "iobuf_large_cache_size": 16 00:19:50.029 } 00:19:50.029 }, 00:19:50.029 { 00:19:50.029 "method": "bdev_raid_set_options", 00:19:50.029 "params": { 00:19:50.029 "process_window_size_kb": 1024 00:19:50.029 } 00:19:50.029 }, 00:19:50.029 { 00:19:50.029 "method": "bdev_iscsi_set_options", 00:19:50.029 "params": { 00:19:50.029 "timeout_sec": 30 00:19:50.029 } 00:19:50.029 }, 00:19:50.029 { 00:19:50.029 "method": "bdev_nvme_set_options", 00:19:50.029 "params": { 00:19:50.029 "action_on_timeout": "none", 00:19:50.029 "timeout_us": 0, 00:19:50.029 "timeout_admin_us": 0, 00:19:50.029 "keep_alive_timeout_ms": 10000, 00:19:50.029 "arbitration_burst": 0, 00:19:50.029 "low_priority_weight": 0, 00:19:50.029 "medium_priority_weight": 0, 00:19:50.029 "high_priority_weight": 0, 00:19:50.029 "nvme_adminq_poll_period_us": 10000, 00:19:50.029 "nvme_ioq_poll_period_us": 0, 00:19:50.029 "io_queue_requests": 0, 00:19:50.029 "delay_cmd_submit": true, 00:19:50.029 "transport_retry_count": 4, 00:19:50.029 "bdev_retry_count": 3, 00:19:50.029 "transport_ack_timeout": 0, 00:19:50.029 "ctrlr_loss_timeout_sec": 0, 00:19:50.029 "reconnect_delay_sec": 0, 00:19:50.029 "fast_io_fail_timeout_sec": 0, 00:19:50.029 "disable_auto_failback": false, 00:19:50.029 "generate_uuids": false, 00:19:50.029 "transport_tos": 0, 00:19:50.029 "nvme_error_stat": 
false, 00:19:50.029 "rdma_srq_size": 0, 00:19:50.029 "io_path_stat": false, 00:19:50.029 "allow_accel_sequence": false, 00:19:50.029 "rdma_max_cq_size": 0, 00:19:50.029 "rdma_cm_event_timeout_ms": 0, 00:19:50.029 "dhchap_digests": [ 00:19:50.029 "sha256", 00:19:50.029 "sha384", 00:19:50.029 "sha512" 00:19:50.029 ], 00:19:50.029 "dhchap_dhgroups": [ 00:19:50.029 "null", 00:19:50.029 "ffdhe2048", 00:19:50.029 "ffdhe3072", 00:19:50.029 "ffdhe4096", 00:19:50.029 "ffdhe6144", 00:19:50.029 "ffdhe8192" 00:19:50.029 ] 00:19:50.029 } 00:19:50.029 }, 00:19:50.029 { 00:19:50.029 "method": "bdev_nvme_set_hotplug", 00:19:50.029 "params": { 00:19:50.029 "period_us": 100000, 00:19:50.029 "enable": false 00:19:50.029 } 00:19:50.029 }, 00:19:50.029 { 00:19:50.029 "method": "bdev_malloc_create", 00:19:50.029 "params": { 00:19:50.029 "name": "malloc0", 00:19:50.029 "num_blocks": 8192, 00:19:50.029 "block_size": 4096, 00:19:50.029 "physical_block_size": 4096, 00:19:50.029 "uuid": "35c63915-3051-4ef2-b810-ff96d9ea596d", 00:19:50.029 "optimal_io_boundary": 0 00:19:50.029 } 00:19:50.029 }, 00:19:50.029 { 00:19:50.029 "method": "bdev_wait_for_examine" 00:19:50.029 } 00:19:50.029 ] 00:19:50.029 }, 00:19:50.029 { 00:19:50.029 "subsystem": "nbd", 00:19:50.029 "config": [] 00:19:50.029 }, 00:19:50.029 { 00:19:50.029 "subsystem": "scheduler", 00:19:50.029 "config": [ 00:19:50.029 { 00:19:50.029 "method": "framework_set_scheduler", 00:19:50.029 "params": { 00:19:50.029 "name": "static" 00:19:50.029 } 00:19:50.029 } 00:19:50.029 ] 00:19:50.029 }, 00:19:50.029 { 00:19:50.029 "subsystem": "nvmf", 00:19:50.029 "config": [ 00:19:50.029 { 00:19:50.029 "method": "nvmf_set_config", 00:19:50.029 "params": { 00:19:50.029 "discovery_filter": "match_any", 00:19:50.029 "admin_cmd_passthru": { 00:19:50.029 "identify_ctrlr": false 00:19:50.029 } 00:19:50.029 } 00:19:50.029 }, 00:19:50.029 { 00:19:50.029 "method": "nvmf_set_max_subsystems", 00:19:50.029 "params": { 00:19:50.029 "max_subsystems": 1024 00:19:50.029 } 00:19:50.029 }, 00:19:50.029 { 00:19:50.029 "method": "nvmf_set_crdt", 00:19:50.029 "params": { 00:19:50.029 "crdt1": 0, 00:19:50.029 "crdt2": 0, 00:19:50.029 "crdt3": 0 00:19:50.029 } 00:19:50.029 }, 00:19:50.029 { 00:19:50.029 "method": "nvmf_create_transport", 00:19:50.030 "params": { 00:19:50.030 "trtype": "TCP", 00:19:50.030 "max_queue_depth": 128, 00:19:50.030 "max_io_qpairs_per_ctrlr": 127, 00:19:50.030 "in_capsule_data_size": 4096, 00:19:50.030 "max_io_size": 131072, 00:19:50.030 "io_unit_size": 131072, 00:19:50.030 "max_aq_depth": 128, 00:19:50.030 "num_shared_buffers": 511, 00:19:50.030 "buf_cache_size": 4294967295, 00:19:50.030 "dif_insert_or_strip": false, 00:19:50.030 "zcopy": false, 00:19:50.030 "c2h_success": false, 00:19:50.030 "sock_priority": 0, 00:19:50.030 "abort_timeout_sec": 1, 00:19:50.030 "ack_timeout": 0, 00:19:50.030 "data_wr_pool_size": 0 00:19:50.030 } 00:19:50.030 }, 00:19:50.030 { 00:19:50.030 "method": "nvmf_create_subsystem", 00:19:50.030 "params": { 00:19:50.030 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.030 "allow_any_host": false, 00:19:50.030 "serial_number": "SPDK00000000000001", 00:19:50.030 "model_number": "SPDK bdev Controller", 00:19:50.030 "max_namespaces": 10, 00:19:50.030 "min_cntlid": 1, 00:19:50.030 "max_cntlid": 65519, 00:19:50.030 "ana_reporting": false 00:19:50.030 } 00:19:50.030 }, 00:19:50.030 { 00:19:50.030 "method": "nvmf_subsystem_add_host", 00:19:50.030 "params": { 00:19:50.030 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.030 "host": "nqn.2016-06.io.spdk:host1", 
00:19:50.030 "psk": "/tmp/tmp.7lTupJrdXS" 00:19:50.030 } 00:19:50.030 }, 00:19:50.030 { 00:19:50.030 "method": "nvmf_subsystem_add_ns", 00:19:50.030 "params": { 00:19:50.030 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.030 "namespace": { 00:19:50.030 "nsid": 1, 00:19:50.030 "bdev_name": "malloc0", 00:19:50.030 "nguid": "35C6391530514EF2B810FF96D9EA596D", 00:19:50.030 "uuid": "35c63915-3051-4ef2-b810-ff96d9ea596d", 00:19:50.030 "no_auto_visible": false 00:19:50.030 } 00:19:50.030 } 00:19:50.030 }, 00:19:50.030 { 00:19:50.030 "method": "nvmf_subsystem_add_listener", 00:19:50.030 "params": { 00:19:50.030 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.030 "listen_address": { 00:19:50.030 "trtype": "TCP", 00:19:50.030 "adrfam": "IPv4", 00:19:50.030 "traddr": "10.0.0.2", 00:19:50.030 "trsvcid": "4420" 00:19:50.030 }, 00:19:50.030 "secure_channel": true 00:19:50.030 } 00:19:50.030 } 00:19:50.030 ] 00:19:50.030 } 00:19:50.030 ] 00:19:50.030 }' 00:19:50.030 16:02:29 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:50.289 16:02:29 -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:50.289 "subsystems": [ 00:19:50.289 { 00:19:50.289 "subsystem": "keyring", 00:19:50.289 "config": [] 00:19:50.289 }, 00:19:50.289 { 00:19:50.289 "subsystem": "iobuf", 00:19:50.289 "config": [ 00:19:50.289 { 00:19:50.289 "method": "iobuf_set_options", 00:19:50.289 "params": { 00:19:50.289 "small_pool_count": 8192, 00:19:50.289 "large_pool_count": 1024, 00:19:50.289 "small_bufsize": 8192, 00:19:50.289 "large_bufsize": 135168 00:19:50.289 } 00:19:50.289 } 00:19:50.289 ] 00:19:50.289 }, 00:19:50.289 { 00:19:50.289 "subsystem": "sock", 00:19:50.289 "config": [ 00:19:50.289 { 00:19:50.289 "method": "sock_impl_set_options", 00:19:50.289 "params": { 00:19:50.289 "impl_name": "posix", 00:19:50.289 "recv_buf_size": 2097152, 00:19:50.289 "send_buf_size": 2097152, 00:19:50.289 "enable_recv_pipe": true, 00:19:50.289 "enable_quickack": false, 00:19:50.289 "enable_placement_id": 0, 00:19:50.289 "enable_zerocopy_send_server": true, 00:19:50.289 "enable_zerocopy_send_client": false, 00:19:50.289 "zerocopy_threshold": 0, 00:19:50.289 "tls_version": 0, 00:19:50.289 "enable_ktls": false 00:19:50.289 } 00:19:50.289 }, 00:19:50.289 { 00:19:50.289 "method": "sock_impl_set_options", 00:19:50.289 "params": { 00:19:50.289 "impl_name": "ssl", 00:19:50.289 "recv_buf_size": 4096, 00:19:50.289 "send_buf_size": 4096, 00:19:50.289 "enable_recv_pipe": true, 00:19:50.289 "enable_quickack": false, 00:19:50.289 "enable_placement_id": 0, 00:19:50.289 "enable_zerocopy_send_server": true, 00:19:50.289 "enable_zerocopy_send_client": false, 00:19:50.289 "zerocopy_threshold": 0, 00:19:50.289 "tls_version": 0, 00:19:50.289 "enable_ktls": false 00:19:50.289 } 00:19:50.289 } 00:19:50.289 ] 00:19:50.289 }, 00:19:50.289 { 00:19:50.289 "subsystem": "vmd", 00:19:50.289 "config": [] 00:19:50.289 }, 00:19:50.289 { 00:19:50.289 "subsystem": "accel", 00:19:50.289 "config": [ 00:19:50.289 { 00:19:50.289 "method": "accel_set_options", 00:19:50.289 "params": { 00:19:50.289 "small_cache_size": 128, 00:19:50.289 "large_cache_size": 16, 00:19:50.289 "task_count": 2048, 00:19:50.289 "sequence_count": 2048, 00:19:50.289 "buf_count": 2048 00:19:50.289 } 00:19:50.289 } 00:19:50.289 ] 00:19:50.289 }, 00:19:50.289 { 00:19:50.289 "subsystem": "bdev", 00:19:50.289 "config": [ 00:19:50.289 { 00:19:50.289 "method": "bdev_set_options", 00:19:50.289 "params": { 00:19:50.289 "bdev_io_pool_size": 65535, 
00:19:50.289 "bdev_io_cache_size": 256, 00:19:50.289 "bdev_auto_examine": true, 00:19:50.289 "iobuf_small_cache_size": 128, 00:19:50.289 "iobuf_large_cache_size": 16 00:19:50.289 } 00:19:50.289 }, 00:19:50.289 { 00:19:50.289 "method": "bdev_raid_set_options", 00:19:50.289 "params": { 00:19:50.289 "process_window_size_kb": 1024 00:19:50.289 } 00:19:50.289 }, 00:19:50.289 { 00:19:50.289 "method": "bdev_iscsi_set_options", 00:19:50.289 "params": { 00:19:50.289 "timeout_sec": 30 00:19:50.289 } 00:19:50.289 }, 00:19:50.289 { 00:19:50.289 "method": "bdev_nvme_set_options", 00:19:50.289 "params": { 00:19:50.289 "action_on_timeout": "none", 00:19:50.289 "timeout_us": 0, 00:19:50.289 "timeout_admin_us": 0, 00:19:50.289 "keep_alive_timeout_ms": 10000, 00:19:50.289 "arbitration_burst": 0, 00:19:50.289 "low_priority_weight": 0, 00:19:50.289 "medium_priority_weight": 0, 00:19:50.289 "high_priority_weight": 0, 00:19:50.289 "nvme_adminq_poll_period_us": 10000, 00:19:50.289 "nvme_ioq_poll_period_us": 0, 00:19:50.289 "io_queue_requests": 512, 00:19:50.289 "delay_cmd_submit": true, 00:19:50.289 "transport_retry_count": 4, 00:19:50.289 "bdev_retry_count": 3, 00:19:50.289 "transport_ack_timeout": 0, 00:19:50.289 "ctrlr_loss_timeout_sec": 0, 00:19:50.289 "reconnect_delay_sec": 0, 00:19:50.289 "fast_io_fail_timeout_sec": 0, 00:19:50.289 "disable_auto_failback": false, 00:19:50.289 "generate_uuids": false, 00:19:50.289 "transport_tos": 0, 00:19:50.289 "nvme_error_stat": false, 00:19:50.289 "rdma_srq_size": 0, 00:19:50.289 "io_path_stat": false, 00:19:50.289 "allow_accel_sequence": false, 00:19:50.290 "rdma_max_cq_size": 0, 00:19:50.290 "rdma_cm_event_timeout_ms": 0, 00:19:50.290 "dhchap_digests": [ 00:19:50.290 "sha256", 00:19:50.290 "sha384", 00:19:50.290 "sha512" 00:19:50.290 ], 00:19:50.290 "dhchap_dhgroups": [ 00:19:50.290 "null", 00:19:50.290 "ffdhe2048", 00:19:50.290 "ffdhe3072", 00:19:50.290 "ffdhe4096", 00:19:50.290 "ffdhe6144", 00:19:50.290 "ffdhe8192" 00:19:50.290 ] 00:19:50.290 } 00:19:50.290 }, 00:19:50.290 { 00:19:50.290 "method": "bdev_nvme_attach_controller", 00:19:50.290 "params": { 00:19:50.290 "name": "TLSTEST", 00:19:50.290 "trtype": "TCP", 00:19:50.290 "adrfam": "IPv4", 00:19:50.290 "traddr": "10.0.0.2", 00:19:50.290 "trsvcid": "4420", 00:19:50.290 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.290 "prchk_reftag": false, 00:19:50.290 "prchk_guard": false, 00:19:50.290 "ctrlr_loss_timeout_sec": 0, 00:19:50.290 "reconnect_delay_sec": 0, 00:19:50.290 "fast_io_fail_timeout_sec": 0, 00:19:50.290 "psk": "/tmp/tmp.7lTupJrdXS", 00:19:50.290 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:50.290 "hdgst": false, 00:19:50.290 "ddgst": false 00:19:50.290 } 00:19:50.290 }, 00:19:50.290 { 00:19:50.290 "method": "bdev_nvme_set_hotplug", 00:19:50.290 "params": { 00:19:50.290 "period_us": 100000, 00:19:50.290 "enable": false 00:19:50.290 } 00:19:50.290 }, 00:19:50.290 { 00:19:50.290 "method": "bdev_wait_for_examine" 00:19:50.290 } 00:19:50.290 ] 00:19:50.290 }, 00:19:50.290 { 00:19:50.290 "subsystem": "nbd", 00:19:50.290 "config": [] 00:19:50.290 } 00:19:50.290 ] 00:19:50.290 }' 00:19:50.290 16:02:29 -- target/tls.sh@199 -- # killprocess 2477467 00:19:50.290 16:02:29 -- common/autotest_common.sh@936 -- # '[' -z 2477467 ']' 00:19:50.290 16:02:29 -- common/autotest_common.sh@940 -- # kill -0 2477467 00:19:50.290 16:02:29 -- common/autotest_common.sh@941 -- # uname 00:19:50.290 16:02:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:50.290 16:02:29 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 2477467 00:19:50.290 16:02:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:50.290 16:02:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:50.290 16:02:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2477467' 00:19:50.290 killing process with pid 2477467 00:19:50.290 16:02:29 -- common/autotest_common.sh@955 -- # kill 2477467 00:19:50.290 Received shutdown signal, test time was about 10.000000 seconds 00:19:50.290 00:19:50.290 Latency(us) 00:19:50.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.290 =================================================================================================================== 00:19:50.290 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:50.290 [2024-04-26 16:02:29.861236] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:50.290 16:02:29 -- common/autotest_common.sh@960 -- # wait 2477467 00:19:51.226 16:02:30 -- target/tls.sh@200 -- # killprocess 2477194 00:19:51.226 16:02:30 -- common/autotest_common.sh@936 -- # '[' -z 2477194 ']' 00:19:51.226 16:02:30 -- common/autotest_common.sh@940 -- # kill -0 2477194 00:19:51.226 16:02:30 -- common/autotest_common.sh@941 -- # uname 00:19:51.226 16:02:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:51.226 16:02:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2477194 00:19:51.485 16:02:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:51.485 16:02:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:51.485 16:02:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2477194' 00:19:51.485 killing process with pid 2477194 00:19:51.485 16:02:30 -- common/autotest_common.sh@955 -- # kill 2477194 00:19:51.485 [2024-04-26 16:02:30.923720] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:51.485 16:02:30 -- common/autotest_common.sh@960 -- # wait 2477194 00:19:52.862 16:02:32 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:52.862 16:02:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:52.862 16:02:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:52.862 16:02:32 -- target/tls.sh@203 -- # echo '{ 00:19:52.863 "subsystems": [ 00:19:52.863 { 00:19:52.863 "subsystem": "keyring", 00:19:52.863 "config": [] 00:19:52.863 }, 00:19:52.863 { 00:19:52.863 "subsystem": "iobuf", 00:19:52.863 "config": [ 00:19:52.863 { 00:19:52.863 "method": "iobuf_set_options", 00:19:52.863 "params": { 00:19:52.863 "small_pool_count": 8192, 00:19:52.863 "large_pool_count": 1024, 00:19:52.863 "small_bufsize": 8192, 00:19:52.863 "large_bufsize": 135168 00:19:52.863 } 00:19:52.863 } 00:19:52.863 ] 00:19:52.863 }, 00:19:52.863 { 00:19:52.863 "subsystem": "sock", 00:19:52.863 "config": [ 00:19:52.863 { 00:19:52.863 "method": "sock_impl_set_options", 00:19:52.863 "params": { 00:19:52.863 "impl_name": "posix", 00:19:52.863 "recv_buf_size": 2097152, 00:19:52.863 "send_buf_size": 2097152, 00:19:52.863 "enable_recv_pipe": true, 00:19:52.863 "enable_quickack": false, 00:19:52.863 "enable_placement_id": 0, 00:19:52.863 "enable_zerocopy_send_server": true, 00:19:52.863 "enable_zerocopy_send_client": false, 00:19:52.863 "zerocopy_threshold": 0, 00:19:52.863 "tls_version": 0, 00:19:52.863 "enable_ktls": false 
00:19:52.863 } 00:19:52.863 }, 00:19:52.863 { 00:19:52.863 "method": "sock_impl_set_options", 00:19:52.863 "params": { 00:19:52.863 "impl_name": "ssl", 00:19:52.863 "recv_buf_size": 4096, 00:19:52.863 "send_buf_size": 4096, 00:19:52.863 "enable_recv_pipe": true, 00:19:52.863 "enable_quickack": false, 00:19:52.863 "enable_placement_id": 0, 00:19:52.863 "enable_zerocopy_send_server": true, 00:19:52.863 "enable_zerocopy_send_client": false, 00:19:52.863 "zerocopy_threshold": 0, 00:19:52.863 "tls_version": 0, 00:19:52.863 "enable_ktls": false 00:19:52.863 } 00:19:52.863 } 00:19:52.863 ] 00:19:52.863 }, 00:19:52.863 { 00:19:52.863 "subsystem": "vmd", 00:19:52.863 "config": [] 00:19:52.863 }, 00:19:52.863 { 00:19:52.863 "subsystem": "accel", 00:19:52.863 "config": [ 00:19:52.863 { 00:19:52.863 "method": "accel_set_options", 00:19:52.863 "params": { 00:19:52.863 "small_cache_size": 128, 00:19:52.863 "large_cache_size": 16, 00:19:52.863 "task_count": 2048, 00:19:52.863 "sequence_count": 2048, 00:19:52.863 "buf_count": 2048 00:19:52.863 } 00:19:52.863 } 00:19:52.863 ] 00:19:52.863 }, 00:19:52.863 { 00:19:52.863 "subsystem": "bdev", 00:19:52.863 "config": [ 00:19:52.863 { 00:19:52.863 "method": "bdev_set_options", 00:19:52.863 "params": { 00:19:52.863 "bdev_io_pool_size": 65535, 00:19:52.863 "bdev_io_cache_size": 256, 00:19:52.863 "bdev_auto_examine": true, 00:19:52.863 "iobuf_small_cache_size": 128, 00:19:52.863 "iobuf_large_cache_size": 16 00:19:52.863 } 00:19:52.863 }, 00:19:52.863 { 00:19:52.863 "method": "bdev_raid_set_options", 00:19:52.863 "params": { 00:19:52.863 "process_window_size_kb": 1024 00:19:52.863 } 00:19:52.863 }, 00:19:52.863 { 00:19:52.863 "method": "bdev_iscsi_set_options", 00:19:52.863 "params": { 00:19:52.863 "timeout_sec": 30 00:19:52.863 } 00:19:52.863 }, 00:19:52.863 { 00:19:52.863 "method": "bdev_nvme_set_options", 00:19:52.863 "params": { 00:19:52.863 "action_on_timeout": "none", 00:19:52.863 "timeout_us": 0, 00:19:52.863 "timeout_admin_us": 0, 00:19:52.863 "keep_alive_timeout_ms": 10000, 00:19:52.863 "arbitration_burst": 0, 00:19:52.863 "low_priority_weight": 0, 00:19:52.863 "medium_priority_weight": 0, 00:19:52.863 "high_priority_weight": 0, 00:19:52.863 "nvme_adminq_poll_period_us": 10000, 00:19:52.863 "nvme_ioq_poll_period_us": 0, 00:19:52.863 "io_queue_requests": 0, 00:19:52.863 "delay_cmd_submit": true, 00:19:52.863 "transport_retry_count": 4, 00:19:52.863 "bdev_retry_count": 3, 00:19:52.863 "transport_ack_timeout": 0, 00:19:52.863 "ctrlr_loss_timeout_sec": 0, 00:19:52.863 "reconnect_delay_sec": 0, 00:19:52.863 "fast_io_fail_timeout_sec": 0, 00:19:52.863 "disable_auto_failback": false, 00:19:52.863 "generate_uuids": false, 00:19:52.863 "transport_tos": 0, 00:19:52.863 "nvme_error_stat": false, 00:19:52.863 "rdma_srq_size": 0, 00:19:52.863 "io_path_stat": false, 00:19:52.863 "allow_accel_sequence": false, 00:19:52.863 "rdma_max_cq_size": 0, 00:19:52.863 "rdma_cm_event_timeout_ms": 0, 00:19:52.863 "dhchap_digests": [ 00:19:52.863 "sha256", 00:19:52.863 "sha384", 00:19:52.863 "sha512" 00:19:52.863 ], 00:19:52.863 "dhchap_dhgroups": [ 00:19:52.863 "null", 00:19:52.863 "ffdhe2048", 00:19:52.863 "ffdhe3072", 00:19:52.863 "ffdhe4096", 00:19:52.863 "ffdhe6144", 00:19:52.863 "ffdhe8192" 00:19:52.863 ] 00:19:52.863 } 00:19:52.863 }, 00:19:52.863 { 00:19:52.863 "method": "bdev_nvme_set_hotplug", 00:19:52.863 "params": { 00:19:52.863 "period_us": 100000, 00:19:52.863 "enable": false 00:19:52.863 } 00:19:52.863 }, 00:19:52.863 { 00:19:52.863 "method": "bdev_malloc_create", 
00:19:52.863 "params": { 00:19:52.863 "name": "malloc0", 00:19:52.863 "num_blocks": 8192, 00:19:52.863 "block_size": 4096, 00:19:52.863 "physical_block_size": 4096, 00:19:52.863 "uuid": "35c63915-3051-4ef2-b810-ff96d9ea596d", 00:19:52.863 "optimal_io_boundary": 0 00:19:52.863 } 00:19:52.863 }, 00:19:52.863 { 00:19:52.863 "method": "bdev_wait_for_examine" 00:19:52.863 } 00:19:52.863 ] 00:19:52.863 }, 00:19:52.863 { 00:19:52.863 "subsystem": "nbd", 00:19:52.863 "config": [] 00:19:52.863 }, 00:19:52.863 { 00:19:52.863 "subsystem": "scheduler", 00:19:52.863 "config": [ 00:19:52.863 { 00:19:52.863 "method": "framework_set_scheduler", 00:19:52.863 "params": { 00:19:52.863 "name": "static" 00:19:52.863 } 00:19:52.863 } 00:19:52.863 ] 00:19:52.863 }, 00:19:52.863 { 00:19:52.863 "subsystem": "nvmf", 00:19:52.863 "config": [ 00:19:52.863 { 00:19:52.863 "method": "nvmf_set_config", 00:19:52.863 "params": { 00:19:52.863 "discovery_filter": "match_any", 00:19:52.863 "admin_cmd_passthru": { 00:19:52.863 "identify_ctrlr": false 00:19:52.863 } 00:19:52.863 } 00:19:52.863 }, 00:19:52.863 { 00:19:52.863 "method": "nvmf_set_max_subsystems", 00:19:52.863 "params": { 00:19:52.863 "max_subsystems": 1024 00:19:52.863 } 00:19:52.863 }, 00:19:52.863 { 00:19:52.863 "method": "nvmf_set_crdt", 00:19:52.863 "params": { 00:19:52.863 "crdt1": 0, 00:19:52.863 "crdt2": 0, 00:19:52.863 "crdt3": 0 00:19:52.863 } 00:19:52.863 }, 00:19:52.863 { 00:19:52.863 "method": "nvmf_create_transport", 00:19:52.863 "params": { 00:19:52.863 "trtype": "TCP", 00:19:52.863 "max_queue_depth": 128, 00:19:52.863 "max_io_qpairs_per_ctrlr": 127, 00:19:52.863 "in_capsule_data_size": 4096, 00:19:52.863 "max_io_size": 131072, 00:19:52.863 "io_unit_size": 131072, 00:19:52.863 "max_aq_depth": 128, 00:19:52.863 "num_shared_buffers": 511, 00:19:52.863 "buf_cache_size": 4294967295, 00:19:52.863 "dif_insert_or_strip": false, 00:19:52.863 "zcopy": false, 00:19:52.863 "c2h_success": false, 00:19:52.863 "sock_priority": 0, 00:19:52.863 "abort_timeout_sec": 1, 00:19:52.863 "ack_timeout": 0, 00:19:52.863 "data_wr_pool_size": 0 00:19:52.863 } 00:19:52.863 }, 00:19:52.863 { 00:19:52.863 "method": "nvmf_create_subsystem", 00:19:52.863 "params": { 00:19:52.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.863 "allow_any_host": false, 00:19:52.863 "serial_number": "SPDK00000000000001", 00:19:52.863 "model_number": "SPDK bdev Controller", 00:19:52.863 "max_namespaces": 10, 00:19:52.863 "min_cntlid": 1, 00:19:52.863 "max_cntlid": 65519, 00:19:52.863 "ana_reporting": false 00:19:52.863 } 00:19:52.863 }, 00:19:52.863 { 00:19:52.863 "method": "nvmf_subsystem_add_host", 00:19:52.863 "params": { 00:19:52.864 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.864 "host": "nqn.2016-06.io.spdk:host1", 00:19:52.864 "psk": "/tmp/tmp.7lTupJrdXS" 00:19:52.864 } 00:19:52.864 }, 00:19:52.864 { 00:19:52.864 "method": "nvmf_subsystem_add_ns", 00:19:52.864 "params": { 00:19:52.864 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.864 "namespace": { 00:19:52.864 "nsid": 1, 00:19:52.864 "bdev_name": "malloc0", 00:19:52.864 "nguid": "35C6391530514EF2B810FF96D9EA596D", 00:19:52.864 "uuid": "35c63915-3051-4ef2-b810-ff96d9ea596d", 00:19:52.864 "no_auto_visible": false 00:19:52.864 } 00:19:52.864 } 00:19:52.864 }, 00:19:52.864 { 00:19:52.864 "method": "nvmf_subsystem_add_listener", 00:19:52.864 "params": { 00:19:52.864 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.864 "listen_address": { 00:19:52.864 "trtype": "TCP", 00:19:52.864 "adrfam": "IPv4", 00:19:52.864 "traddr": "10.0.0.2", 00:19:52.864 
"trsvcid": "4420" 00:19:52.864 }, 00:19:52.864 "secure_channel": true 00:19:52.864 } 00:19:52.864 } 00:19:52.864 ] 00:19:52.864 } 00:19:52.864 ] 00:19:52.864 }' 00:19:52.864 16:02:32 -- common/autotest_common.sh@10 -- # set +x 00:19:52.864 16:02:32 -- nvmf/common.sh@470 -- # nvmfpid=2478162 00:19:52.864 16:02:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:52.864 16:02:32 -- nvmf/common.sh@471 -- # waitforlisten 2478162 00:19:52.864 16:02:32 -- common/autotest_common.sh@817 -- # '[' -z 2478162 ']' 00:19:52.864 16:02:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.864 16:02:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:52.864 16:02:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.864 16:02:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:52.864 16:02:32 -- common/autotest_common.sh@10 -- # set +x 00:19:52.864 [2024-04-26 16:02:32.345977] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:52.864 [2024-04-26 16:02:32.346083] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.864 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.864 [2024-04-26 16:02:32.454304] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.123 [2024-04-26 16:02:32.668354] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.123 [2024-04-26 16:02:32.668403] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.123 [2024-04-26 16:02:32.668414] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.123 [2024-04-26 16:02:32.668424] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.123 [2024-04-26 16:02:32.668434] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:53.123 [2024-04-26 16:02:32.668523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.691 [2024-04-26 16:02:33.217113] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.691 [2024-04-26 16:02:33.233067] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:53.691 [2024-04-26 16:02:33.249137] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:53.691 [2024-04-26 16:02:33.249345] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.691 16:02:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:53.691 16:02:33 -- common/autotest_common.sh@850 -- # return 0 00:19:53.691 16:02:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:53.691 16:02:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:53.691 16:02:33 -- common/autotest_common.sh@10 -- # set +x 00:19:53.691 16:02:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.691 16:02:33 -- target/tls.sh@207 -- # bdevperf_pid=2478409 00:19:53.691 16:02:33 -- target/tls.sh@208 -- # waitforlisten 2478409 /var/tmp/bdevperf.sock 00:19:53.691 16:02:33 -- common/autotest_common.sh@817 -- # '[' -z 2478409 ']' 00:19:53.691 16:02:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.691 16:02:33 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:53.691 16:02:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:53.691 16:02:33 -- target/tls.sh@204 -- # echo '{ 00:19:53.691 "subsystems": [ 00:19:53.691 { 00:19:53.691 "subsystem": "keyring", 00:19:53.691 "config": [] 00:19:53.691 }, 00:19:53.691 { 00:19:53.691 "subsystem": "iobuf", 00:19:53.691 "config": [ 00:19:53.691 { 00:19:53.691 "method": "iobuf_set_options", 00:19:53.691 "params": { 00:19:53.691 "small_pool_count": 8192, 00:19:53.691 "large_pool_count": 1024, 00:19:53.691 "small_bufsize": 8192, 00:19:53.691 "large_bufsize": 135168 00:19:53.691 } 00:19:53.691 } 00:19:53.691 ] 00:19:53.691 }, 00:19:53.691 { 00:19:53.691 "subsystem": "sock", 00:19:53.691 "config": [ 00:19:53.691 { 00:19:53.691 "method": "sock_impl_set_options", 00:19:53.691 "params": { 00:19:53.691 "impl_name": "posix", 00:19:53.691 "recv_buf_size": 2097152, 00:19:53.691 "send_buf_size": 2097152, 00:19:53.691 "enable_recv_pipe": true, 00:19:53.691 "enable_quickack": false, 00:19:53.691 "enable_placement_id": 0, 00:19:53.691 "enable_zerocopy_send_server": true, 00:19:53.691 "enable_zerocopy_send_client": false, 00:19:53.691 "zerocopy_threshold": 0, 00:19:53.691 "tls_version": 0, 00:19:53.691 "enable_ktls": false 00:19:53.691 } 00:19:53.691 }, 00:19:53.691 { 00:19:53.691 "method": "sock_impl_set_options", 00:19:53.691 "params": { 00:19:53.691 "impl_name": "ssl", 00:19:53.691 "recv_buf_size": 4096, 00:19:53.691 "send_buf_size": 4096, 00:19:53.691 "enable_recv_pipe": true, 00:19:53.691 "enable_quickack": false, 00:19:53.691 "enable_placement_id": 0, 00:19:53.691 "enable_zerocopy_send_server": true, 00:19:53.691 "enable_zerocopy_send_client": false, 00:19:53.691 "zerocopy_threshold": 0, 00:19:53.691 "tls_version": 0, 00:19:53.691 "enable_ktls": false 00:19:53.691 } 00:19:53.691 } 00:19:53.691 ] 00:19:53.691 }, 00:19:53.691 { 00:19:53.691 "subsystem": "vmd", 
00:19:53.691 "config": [] 00:19:53.691 }, 00:19:53.691 { 00:19:53.691 "subsystem": "accel", 00:19:53.691 "config": [ 00:19:53.691 { 00:19:53.691 "method": "accel_set_options", 00:19:53.691 "params": { 00:19:53.691 "small_cache_size": 128, 00:19:53.691 "large_cache_size": 16, 00:19:53.691 "task_count": 2048, 00:19:53.691 "sequence_count": 2048, 00:19:53.691 "buf_count": 2048 00:19:53.691 } 00:19:53.691 } 00:19:53.691 ] 00:19:53.691 }, 00:19:53.691 { 00:19:53.691 "subsystem": "bdev", 00:19:53.691 "config": [ 00:19:53.691 { 00:19:53.691 "method": "bdev_set_options", 00:19:53.691 "params": { 00:19:53.691 "bdev_io_pool_size": 65535, 00:19:53.691 "bdev_io_cache_size": 256, 00:19:53.691 "bdev_auto_examine": true, 00:19:53.691 "iobuf_small_cache_size": 128, 00:19:53.691 "iobuf_large_cache_size": 16 00:19:53.691 } 00:19:53.691 }, 00:19:53.691 { 00:19:53.691 "method": "bdev_raid_set_options", 00:19:53.691 "params": { 00:19:53.691 "process_window_size_kb": 1024 00:19:53.691 } 00:19:53.691 }, 00:19:53.691 { 00:19:53.691 "method": "bdev_iscsi_set_options", 00:19:53.691 "params": { 00:19:53.691 "timeout_sec": 30 00:19:53.691 } 00:19:53.691 }, 00:19:53.691 { 00:19:53.691 "method": "bdev_nvme_set_options", 00:19:53.691 "params": { 00:19:53.691 "action_on_timeout": "none", 00:19:53.691 "timeout_us": 0, 00:19:53.691 "timeout_admin_us": 0, 00:19:53.691 "keep_alive_timeout_ms": 10000, 00:19:53.691 "arbitration_burst": 0, 00:19:53.691 "low_priority_weight": 0, 00:19:53.691 "medium_priority_weight": 0, 00:19:53.691 "high_priority_weight": 0, 00:19:53.691 "nvme_adminq_poll_period_us": 10000, 00:19:53.691 "nvme_ioq_poll_period_us": 0, 00:19:53.691 "io_queue_requests": 512, 00:19:53.691 "delay_cmd_submit": true, 00:19:53.691 "transport_retry_count": 4, 00:19:53.691 "bdev_retry_count": 3, 00:19:53.691 "transport_ack_timeout": 0, 00:19:53.691 "ctrlr_loss_timeout_sec": 0, 00:19:53.691 "reconnect_delay_sec": 0, 00:19:53.691 "fast_io_fail_timeout_sec": 0, 00:19:53.691 "disable_auto_failback": false, 00:19:53.691 "generate_uuids": false, 00:19:53.691 "transport_tos": 0, 00:19:53.691 "nvme_error_stat": false, 00:19:53.691 "rdma_srq_size": 0, 00:19:53.691 "io_path_stat": false, 00:19:53.691 "allow_accel_sequence": false, 00:19:53.691 "rdma_max_cq_size": 0, 00:19:53.691 "rdma_cm_event_timeout_ms": 0, 00:19:53.691 "dhchap_digests": [ 00:19:53.691 "sha256", 00:19:53.691 "sha384", 00:19:53.691 "sha512" 00:19:53.691 ], 00:19:53.691 "dhchap_dhgroups": [ 00:19:53.691 "null", 00:19:53.691 "ffdhe2048", 00:19:53.691 "ffdhe3072", 00:19:53.691 "ffdhe4096", 00:19:53.691 "ffdhe6144", 00:19:53.691 "ffdhe8192" 00:19:53.691 ] 00:19:53.691 } 00:19:53.691 }, 00:19:53.691 { 00:19:53.691 "method": "bdev_nvme_attach_controller", 00:19:53.691 "params" 16:02:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:53.691 : { 00:19:53.691 "name": "TLSTEST", 00:19:53.691 "trtype": "TCP", 00:19:53.691 "adrfam": "IPv4", 00:19:53.691 "traddr": "10.0.0.2", 00:19:53.691 "trsvcid": "4420", 00:19:53.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.691 "prchk_reftag": false, 00:19:53.691 "prchk_guard": false, 00:19:53.691 "ctrlr_loss_timeout_sec": 0, 00:19:53.691 "reconnect_delay_sec": 0, 00:19:53.691 "fast_io_fail_timeout_sec": 0, 00:19:53.691 "psk": "/tmp/tmp.7lTupJrdXS", 00:19:53.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:53.691 "hdgst": false, 00:19:53.691 "ddgst": false 00:19:53.691 } 00:19:53.691 }, 00:19:53.691 { 00:19:53.691 "method": "bdev_nvme_set_hotplug", 00:19:53.691 "params": { 00:19:53.691 "period_us": 100000, 00:19:53.691 "enable": false 00:19:53.691 } 00:19:53.691 }, 00:19:53.691 { 00:19:53.691 "method": "bdev_wait_for_examine" 00:19:53.691 } 00:19:53.691 ] 00:19:53.691 }, 00:19:53.691 { 00:19:53.691 "subsystem": "nbd", 00:19:53.691 "config": [] 00:19:53.691 } 00:19:53.691 ] 00:19:53.691 }' 00:19:53.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.691 16:02:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:53.691 16:02:33 -- common/autotest_common.sh@10 -- # set +x 00:19:53.949 [2024-04-26 16:02:33.383821] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:53.949 [2024-04-26 16:02:33.383912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478409 ] 00:19:53.949 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.949 [2024-04-26 16:02:33.483525] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.208 [2024-04-26 16:02:33.707995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.780 [2024-04-26 16:02:34.150455] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:54.780 [2024-04-26 16:02:34.150592] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:54.780 16:02:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:54.780 16:02:34 -- common/autotest_common.sh@850 -- # return 0 00:19:54.780 16:02:34 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:54.780 Running I/O for 10 seconds... 
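With the key file back at mode 0600 (target/tls.sh@181) the same TLS attach succeeds and creates the TLSTESTn1 bdev; the verify workload that produces the table below is then driven over the bdevperf RPC socket. In the run directly above, the attach parameters reach bdevperf through the JSON config on /dev/fd/63, while the earlier run at target/tls.sh@192 issues the equivalent rpc.py call directly. A condensed sketch combining only commands that appear in the log (paths shortened):

    # Start bdevperf idle (-z) and wait for its RPC socket.
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock

    chmod 0600 /tmp/tmp.7lTupJrdXS   # tight permissions: the PSK now loads cleanly

    # TLS attach; on success bdevperf sees the remote namespace as TLSTESTn1.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.7lTupJrdXS

    # Kick off the queued verify run; bdevperf prints the IOPS/latency table when it finishes.
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests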
00:20:06.985 00:20:06.985 Latency(us) 00:20:06.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.985 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:06.985 Verification LBA range: start 0x0 length 0x2000 00:20:06.985 TLSTESTn1 : 10.06 1794.49 7.01 0.00 0.00 71142.54 8491.19 102578.09 00:20:06.985 =================================================================================================================== 00:20:06.985 Total : 1794.49 7.01 0.00 0.00 71142.54 8491.19 102578.09 00:20:06.985 0 00:20:06.985 16:02:44 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:06.985 16:02:44 -- target/tls.sh@214 -- # killprocess 2478409 00:20:06.985 16:02:44 -- common/autotest_common.sh@936 -- # '[' -z 2478409 ']' 00:20:06.985 16:02:44 -- common/autotest_common.sh@940 -- # kill -0 2478409 00:20:06.985 16:02:44 -- common/autotest_common.sh@941 -- # uname 00:20:06.985 16:02:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:06.985 16:02:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2478409 00:20:06.985 16:02:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:06.985 16:02:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:06.985 16:02:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2478409' 00:20:06.985 killing process with pid 2478409 00:20:06.985 16:02:44 -- common/autotest_common.sh@955 -- # kill 2478409 00:20:06.985 Received shutdown signal, test time was about 10.000000 seconds 00:20:06.985 00:20:06.985 Latency(us) 00:20:06.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.985 =================================================================================================================== 00:20:06.985 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:06.985 [2024-04-26 16:02:44.510450] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:06.985 16:02:44 -- common/autotest_common.sh@960 -- # wait 2478409 00:20:06.985 16:02:45 -- target/tls.sh@215 -- # killprocess 2478162 00:20:06.985 16:02:45 -- common/autotest_common.sh@936 -- # '[' -z 2478162 ']' 00:20:06.985 16:02:45 -- common/autotest_common.sh@940 -- # kill -0 2478162 00:20:06.985 16:02:45 -- common/autotest_common.sh@941 -- # uname 00:20:06.985 16:02:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:06.985 16:02:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2478162 00:20:06.985 16:02:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:06.985 16:02:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:06.985 16:02:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2478162' 00:20:06.985 killing process with pid 2478162 00:20:06.985 16:02:45 -- common/autotest_common.sh@955 -- # kill 2478162 00:20:06.985 [2024-04-26 16:02:45.601871] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:06.985 16:02:45 -- common/autotest_common.sh@960 -- # wait 2478162 00:20:07.552 16:02:46 -- target/tls.sh@218 -- # nvmfappstart 00:20:07.552 16:02:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:07.552 16:02:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:07.552 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:20:07.552 16:02:46 
-- nvmf/common.sh@470 -- # nvmfpid=2480696 00:20:07.552 16:02:46 -- nvmf/common.sh@471 -- # waitforlisten 2480696 00:20:07.552 16:02:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:07.552 16:02:46 -- common/autotest_common.sh@817 -- # '[' -z 2480696 ']' 00:20:07.552 16:02:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.552 16:02:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:07.552 16:02:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.552 16:02:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:07.552 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:20:07.552 [2024-04-26 16:02:47.044435] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:20:07.552 [2024-04-26 16:02:47.044537] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.552 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.552 [2024-04-26 16:02:47.152370] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.810 [2024-04-26 16:02:47.366867] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.810 [2024-04-26 16:02:47.366917] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.810 [2024-04-26 16:02:47.366930] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.810 [2024-04-26 16:02:47.366941] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.810 [2024-04-26 16:02:47.366952] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:07.810 [2024-04-26 16:02:47.366986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.377 16:02:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:08.377 16:02:47 -- common/autotest_common.sh@850 -- # return 0 00:20:08.377 16:02:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:08.377 16:02:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:08.377 16:02:47 -- common/autotest_common.sh@10 -- # set +x 00:20:08.377 16:02:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.377 16:02:47 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.7lTupJrdXS 00:20:08.377 16:02:47 -- target/tls.sh@49 -- # local key=/tmp/tmp.7lTupJrdXS 00:20:08.377 16:02:47 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:08.377 [2024-04-26 16:02:48.004793] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.377 16:02:48 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:08.636 16:02:48 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:08.895 [2024-04-26 16:02:48.349718] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:08.895 [2024-04-26 16:02:48.349950] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.895 16:02:48 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:08.895 malloc0 00:20:09.152 16:02:48 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:09.152 16:02:48 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7lTupJrdXS 00:20:09.410 [2024-04-26 16:02:48.883147] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:09.410 16:02:48 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:09.410 16:02:48 -- target/tls.sh@222 -- # bdevperf_pid=2480960 00:20:09.410 16:02:48 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:09.410 16:02:48 -- target/tls.sh@225 -- # waitforlisten 2480960 /var/tmp/bdevperf.sock 00:20:09.410 16:02:48 -- common/autotest_common.sh@817 -- # '[' -z 2480960 ']' 00:20:09.410 16:02:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.410 16:02:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:09.410 16:02:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
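(For reference: the setup_nvmf_tgt step traced above reduces to the RPC sequence below. This is a minimal sketch, not the test script itself; the NQNs, address and PSK file name are the ones printed in this log, and rpc.py stands for the full scripts/rpc.py path used above.)
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k    # -k marks the listener as TLS-enabled
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7lTupJrdXS    # per-host PSK via file path (deprecated form, per the warning above)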
00:20:09.410 16:02:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:09.410 16:02:48 -- common/autotest_common.sh@10 -- # set +x 00:20:09.410 [2024-04-26 16:02:48.956122] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:20:09.410 [2024-04-26 16:02:48.956222] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2480960 ] 00:20:09.410 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.410 [2024-04-26 16:02:49.059314] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.668 [2024-04-26 16:02:49.285218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.233 16:02:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:10.233 16:02:49 -- common/autotest_common.sh@850 -- # return 0 00:20:10.233 16:02:49 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7lTupJrdXS 00:20:10.491 16:02:49 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:10.491 [2024-04-26 16:02:50.064870] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:10.491 nvme0n1 00:20:10.491 16:02:50 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:10.748 Running I/O for 1 seconds... 
00:20:11.682 00:20:11.682 Latency(us) 00:20:11.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.682 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:11.682 Verification LBA range: start 0x0 length 0x2000 00:20:11.682 nvme0n1 : 1.07 1781.99 6.96 0.00 0.00 70148.17 8605.16 94371.84 00:20:11.682 =================================================================================================================== 00:20:11.682 Total : 1781.99 6.96 0.00 0.00 70148.17 8605.16 94371.84 00:20:11.682 0 00:20:11.682 16:02:51 -- target/tls.sh@234 -- # killprocess 2480960 00:20:11.682 16:02:51 -- common/autotest_common.sh@936 -- # '[' -z 2480960 ']' 00:20:11.682 16:02:51 -- common/autotest_common.sh@940 -- # kill -0 2480960 00:20:11.682 16:02:51 -- common/autotest_common.sh@941 -- # uname 00:20:11.682 16:02:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:11.682 16:02:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2480960 00:20:11.940 16:02:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:11.940 16:02:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:11.940 16:02:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2480960' 00:20:11.940 killing process with pid 2480960 00:20:11.940 16:02:51 -- common/autotest_common.sh@955 -- # kill 2480960 00:20:11.940 Received shutdown signal, test time was about 1.000000 seconds 00:20:11.940 00:20:11.940 Latency(us) 00:20:11.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.940 =================================================================================================================== 00:20:11.940 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:11.940 16:02:51 -- common/autotest_common.sh@960 -- # wait 2480960 00:20:12.874 16:02:52 -- target/tls.sh@235 -- # killprocess 2480696 00:20:12.874 16:02:52 -- common/autotest_common.sh@936 -- # '[' -z 2480696 ']' 00:20:12.874 16:02:52 -- common/autotest_common.sh@940 -- # kill -0 2480696 00:20:12.874 16:02:52 -- common/autotest_common.sh@941 -- # uname 00:20:12.874 16:02:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:12.874 16:02:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2480696 00:20:12.874 16:02:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:12.874 16:02:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:12.874 16:02:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2480696' 00:20:12.874 killing process with pid 2480696 00:20:12.874 16:02:52 -- common/autotest_common.sh@955 -- # kill 2480696 00:20:12.874 [2024-04-26 16:02:52.449404] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:12.874 16:02:52 -- common/autotest_common.sh@960 -- # wait 2480696 00:20:14.248 16:02:53 -- target/tls.sh@238 -- # nvmfappstart 00:20:14.248 16:02:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:14.248 16:02:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:14.248 16:02:53 -- common/autotest_common.sh@10 -- # set +x 00:20:14.248 16:02:53 -- nvmf/common.sh@470 -- # nvmfpid=2481838 00:20:14.248 16:02:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:14.248 16:02:53 -- nvmf/common.sh@471 -- # waitforlisten 2481838 00:20:14.248 
16:02:53 -- common/autotest_common.sh@817 -- # '[' -z 2481838 ']' 00:20:14.248 16:02:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.248 16:02:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:14.248 16:02:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.248 16:02:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:14.248 16:02:53 -- common/autotest_common.sh@10 -- # set +x 00:20:14.248 [2024-04-26 16:02:53.884207] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:20:14.248 [2024-04-26 16:02:53.884316] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.507 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.507 [2024-04-26 16:02:53.994550] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.765 [2024-04-26 16:02:54.210609] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.765 [2024-04-26 16:02:54.210654] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.765 [2024-04-26 16:02:54.210664] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.765 [2024-04-26 16:02:54.210674] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.765 [2024-04-26 16:02:54.210684] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:14.765 [2024-04-26 16:02:54.210714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.023 16:02:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:15.023 16:02:54 -- common/autotest_common.sh@850 -- # return 0 00:20:15.023 16:02:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:15.023 16:02:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:15.023 16:02:54 -- common/autotest_common.sh@10 -- # set +x 00:20:15.023 16:02:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.023 16:02:54 -- target/tls.sh@239 -- # rpc_cmd 00:20:15.023 16:02:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.023 16:02:54 -- common/autotest_common.sh@10 -- # set +x 00:20:15.023 [2024-04-26 16:02:54.698260] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.281 malloc0 00:20:15.281 [2024-04-26 16:02:54.773120] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:15.281 [2024-04-26 16:02:54.773364] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.281 16:02:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.281 16:02:54 -- target/tls.sh@252 -- # bdevperf_pid=2481919 00:20:15.281 16:02:54 -- target/tls.sh@254 -- # waitforlisten 2481919 /var/tmp/bdevperf.sock 00:20:15.281 16:02:54 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:15.281 16:02:54 -- common/autotest_common.sh@817 -- # '[' -z 2481919 ']' 00:20:15.281 16:02:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.281 16:02:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:15.281 16:02:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:15.281 16:02:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:15.281 16:02:54 -- common/autotest_common.sh@10 -- # set +x 00:20:15.281 [2024-04-26 16:02:54.871322] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:20:15.281 [2024-04-26 16:02:54.871404] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2481919 ] 00:20:15.281 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.539 [2024-04-26 16:02:54.978080] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.539 [2024-04-26 16:02:55.202585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.104 16:02:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:16.104 16:02:55 -- common/autotest_common.sh@850 -- # return 0 00:20:16.104 16:02:55 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7lTupJrdXS 00:20:16.362 16:02:55 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:16.362 [2024-04-26 16:02:55.967274] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:16.620 nvme0n1 00:20:16.620 16:02:56 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:16.620 Running I/O for 1 seconds... 00:20:17.994 00:20:17.995 Latency(us) 00:20:17.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.995 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:17.995 Verification LBA range: start 0x0 length 0x2000 00:20:17.995 nvme0n1 : 1.07 1702.54 6.65 0.00 0.00 73337.36 8263.23 94827.74 00:20:17.995 =================================================================================================================== 00:20:17.995 Total : 1702.54 6.65 0.00 0.00 73337.36 8263.23 94827.74 00:20:17.995 0 00:20:17.995 16:02:57 -- target/tls.sh@263 -- # rpc_cmd save_config 00:20:17.995 16:02:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.995 16:02:57 -- common/autotest_common.sh@10 -- # set +x 00:20:17.995 16:02:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.995 16:02:57 -- target/tls.sh@263 -- # tgtcfg='{ 00:20:17.995 "subsystems": [ 00:20:17.995 { 00:20:17.995 "subsystem": "keyring", 00:20:17.995 "config": [ 00:20:17.995 { 00:20:17.995 "method": "keyring_file_add_key", 00:20:17.995 "params": { 00:20:17.995 "name": "key0", 00:20:17.995 "path": "/tmp/tmp.7lTupJrdXS" 00:20:17.995 } 00:20:17.995 } 00:20:17.995 ] 00:20:17.995 }, 00:20:17.995 { 00:20:17.995 "subsystem": "iobuf", 00:20:17.995 "config": [ 00:20:17.995 { 00:20:17.995 "method": "iobuf_set_options", 00:20:17.995 "params": { 00:20:17.995 "small_pool_count": 8192, 00:20:17.995 "large_pool_count": 1024, 00:20:17.995 "small_bufsize": 8192, 00:20:17.995 "large_bufsize": 135168 00:20:17.995 } 00:20:17.995 } 00:20:17.995 ] 00:20:17.995 }, 00:20:17.995 { 00:20:17.995 "subsystem": "sock", 00:20:17.995 "config": [ 00:20:17.995 { 00:20:17.995 "method": "sock_impl_set_options", 00:20:17.995 "params": { 00:20:17.995 "impl_name": "posix", 00:20:17.995 "recv_buf_size": 2097152, 00:20:17.995 "send_buf_size": 2097152, 00:20:17.995 "enable_recv_pipe": true, 00:20:17.995 "enable_quickack": false, 00:20:17.995 "enable_placement_id": 0, 00:20:17.995 
"enable_zerocopy_send_server": true, 00:20:17.995 "enable_zerocopy_send_client": false, 00:20:17.995 "zerocopy_threshold": 0, 00:20:17.995 "tls_version": 0, 00:20:17.995 "enable_ktls": false 00:20:17.995 } 00:20:17.995 }, 00:20:17.995 { 00:20:17.995 "method": "sock_impl_set_options", 00:20:17.995 "params": { 00:20:17.995 "impl_name": "ssl", 00:20:17.995 "recv_buf_size": 4096, 00:20:17.995 "send_buf_size": 4096, 00:20:17.995 "enable_recv_pipe": true, 00:20:17.995 "enable_quickack": false, 00:20:17.995 "enable_placement_id": 0, 00:20:17.995 "enable_zerocopy_send_server": true, 00:20:17.995 "enable_zerocopy_send_client": false, 00:20:17.995 "zerocopy_threshold": 0, 00:20:17.995 "tls_version": 0, 00:20:17.995 "enable_ktls": false 00:20:17.995 } 00:20:17.995 } 00:20:17.995 ] 00:20:17.995 }, 00:20:17.995 { 00:20:17.995 "subsystem": "vmd", 00:20:17.995 "config": [] 00:20:17.995 }, 00:20:17.995 { 00:20:17.995 "subsystem": "accel", 00:20:17.995 "config": [ 00:20:17.995 { 00:20:17.995 "method": "accel_set_options", 00:20:17.995 "params": { 00:20:17.995 "small_cache_size": 128, 00:20:17.995 "large_cache_size": 16, 00:20:17.995 "task_count": 2048, 00:20:17.995 "sequence_count": 2048, 00:20:17.995 "buf_count": 2048 00:20:17.995 } 00:20:17.995 } 00:20:17.995 ] 00:20:17.995 }, 00:20:17.995 { 00:20:17.995 "subsystem": "bdev", 00:20:17.995 "config": [ 00:20:17.995 { 00:20:17.995 "method": "bdev_set_options", 00:20:17.995 "params": { 00:20:17.995 "bdev_io_pool_size": 65535, 00:20:17.995 "bdev_io_cache_size": 256, 00:20:17.995 "bdev_auto_examine": true, 00:20:17.995 "iobuf_small_cache_size": 128, 00:20:17.995 "iobuf_large_cache_size": 16 00:20:17.995 } 00:20:17.995 }, 00:20:17.995 { 00:20:17.995 "method": "bdev_raid_set_options", 00:20:17.995 "params": { 00:20:17.995 "process_window_size_kb": 1024 00:20:17.995 } 00:20:17.995 }, 00:20:17.995 { 00:20:17.995 "method": "bdev_iscsi_set_options", 00:20:17.995 "params": { 00:20:17.995 "timeout_sec": 30 00:20:17.995 } 00:20:17.995 }, 00:20:17.995 { 00:20:17.995 "method": "bdev_nvme_set_options", 00:20:17.995 "params": { 00:20:17.995 "action_on_timeout": "none", 00:20:17.995 "timeout_us": 0, 00:20:17.995 "timeout_admin_us": 0, 00:20:17.995 "keep_alive_timeout_ms": 10000, 00:20:17.995 "arbitration_burst": 0, 00:20:17.995 "low_priority_weight": 0, 00:20:17.995 "medium_priority_weight": 0, 00:20:17.995 "high_priority_weight": 0, 00:20:17.995 "nvme_adminq_poll_period_us": 10000, 00:20:17.995 "nvme_ioq_poll_period_us": 0, 00:20:17.995 "io_queue_requests": 0, 00:20:17.995 "delay_cmd_submit": true, 00:20:17.995 "transport_retry_count": 4, 00:20:17.995 "bdev_retry_count": 3, 00:20:17.995 "transport_ack_timeout": 0, 00:20:17.995 "ctrlr_loss_timeout_sec": 0, 00:20:17.995 "reconnect_delay_sec": 0, 00:20:17.995 "fast_io_fail_timeout_sec": 0, 00:20:17.995 "disable_auto_failback": false, 00:20:17.995 "generate_uuids": false, 00:20:17.995 "transport_tos": 0, 00:20:17.995 "nvme_error_stat": false, 00:20:17.995 "rdma_srq_size": 0, 00:20:17.995 "io_path_stat": false, 00:20:17.995 "allow_accel_sequence": false, 00:20:17.995 "rdma_max_cq_size": 0, 00:20:17.995 "rdma_cm_event_timeout_ms": 0, 00:20:17.995 "dhchap_digests": [ 00:20:17.995 "sha256", 00:20:17.995 "sha384", 00:20:17.995 "sha512" 00:20:17.995 ], 00:20:17.995 "dhchap_dhgroups": [ 00:20:17.995 "null", 00:20:17.995 "ffdhe2048", 00:20:17.995 "ffdhe3072", 00:20:17.995 "ffdhe4096", 00:20:17.995 "ffdhe6144", 00:20:17.995 "ffdhe8192" 00:20:17.995 ] 00:20:17.995 } 00:20:17.995 }, 00:20:17.995 { 00:20:17.995 "method": 
"bdev_nvme_set_hotplug", 00:20:17.995 "params": { 00:20:17.995 "period_us": 100000, 00:20:17.995 "enable": false 00:20:17.995 } 00:20:17.995 }, 00:20:17.995 { 00:20:17.995 "method": "bdev_malloc_create", 00:20:17.995 "params": { 00:20:17.995 "name": "malloc0", 00:20:17.995 "num_blocks": 8192, 00:20:17.995 "block_size": 4096, 00:20:17.995 "physical_block_size": 4096, 00:20:17.995 "uuid": "d57db980-e064-44c8-a5ad-01be6b68642a", 00:20:17.995 "optimal_io_boundary": 0 00:20:17.995 } 00:20:17.995 }, 00:20:17.995 { 00:20:17.995 "method": "bdev_wait_for_examine" 00:20:17.995 } 00:20:17.995 ] 00:20:17.995 }, 00:20:17.995 { 00:20:17.995 "subsystem": "nbd", 00:20:17.995 "config": [] 00:20:17.995 }, 00:20:17.995 { 00:20:17.995 "subsystem": "scheduler", 00:20:17.995 "config": [ 00:20:17.995 { 00:20:17.995 "method": "framework_set_scheduler", 00:20:17.995 "params": { 00:20:17.995 "name": "static" 00:20:17.995 } 00:20:17.995 } 00:20:17.995 ] 00:20:17.995 }, 00:20:17.995 { 00:20:17.995 "subsystem": "nvmf", 00:20:17.995 "config": [ 00:20:17.995 { 00:20:17.995 "method": "nvmf_set_config", 00:20:17.995 "params": { 00:20:17.995 "discovery_filter": "match_any", 00:20:17.995 "admin_cmd_passthru": { 00:20:17.995 "identify_ctrlr": false 00:20:17.995 } 00:20:17.995 } 00:20:17.995 }, 00:20:17.995 { 00:20:17.995 "method": "nvmf_set_max_subsystems", 00:20:17.995 "params": { 00:20:17.995 "max_subsystems": 1024 00:20:17.995 } 00:20:17.995 }, 00:20:17.995 { 00:20:17.995 "method": "nvmf_set_crdt", 00:20:17.995 "params": { 00:20:17.995 "crdt1": 0, 00:20:17.995 "crdt2": 0, 00:20:17.995 "crdt3": 0 00:20:17.995 } 00:20:17.995 }, 00:20:17.995 { 00:20:17.995 "method": "nvmf_create_transport", 00:20:17.995 "params": { 00:20:17.995 "trtype": "TCP", 00:20:17.995 "max_queue_depth": 128, 00:20:17.995 "max_io_qpairs_per_ctrlr": 127, 00:20:17.995 "in_capsule_data_size": 4096, 00:20:17.995 "max_io_size": 131072, 00:20:17.995 "io_unit_size": 131072, 00:20:17.995 "max_aq_depth": 128, 00:20:17.995 "num_shared_buffers": 511, 00:20:17.995 "buf_cache_size": 4294967295, 00:20:17.995 "dif_insert_or_strip": false, 00:20:17.995 "zcopy": false, 00:20:17.995 "c2h_success": false, 00:20:17.995 "sock_priority": 0, 00:20:17.995 "abort_timeout_sec": 1, 00:20:17.995 "ack_timeout": 0, 00:20:17.995 "data_wr_pool_size": 0 00:20:17.995 } 00:20:17.995 }, 00:20:17.995 { 00:20:17.995 "method": "nvmf_create_subsystem", 00:20:17.995 "params": { 00:20:17.995 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.995 "allow_any_host": false, 00:20:17.995 "serial_number": "00000000000000000000", 00:20:17.995 "model_number": "SPDK bdev Controller", 00:20:17.995 "max_namespaces": 32, 00:20:17.995 "min_cntlid": 1, 00:20:17.995 "max_cntlid": 65519, 00:20:17.995 "ana_reporting": false 00:20:17.995 } 00:20:17.995 }, 00:20:17.995 { 00:20:17.995 "method": "nvmf_subsystem_add_host", 00:20:17.995 "params": { 00:20:17.995 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.995 "host": "nqn.2016-06.io.spdk:host1", 00:20:17.995 "psk": "key0" 00:20:17.995 } 00:20:17.996 }, 00:20:17.996 { 00:20:17.996 "method": "nvmf_subsystem_add_ns", 00:20:17.996 "params": { 00:20:17.996 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.996 "namespace": { 00:20:17.996 "nsid": 1, 00:20:17.996 "bdev_name": "malloc0", 00:20:17.996 "nguid": "D57DB980E06444C8A5AD01BE6B68642A", 00:20:17.996 "uuid": "d57db980-e064-44c8-a5ad-01be6b68642a", 00:20:17.996 "no_auto_visible": false 00:20:17.996 } 00:20:17.996 } 00:20:17.996 }, 00:20:17.996 { 00:20:17.996 "method": "nvmf_subsystem_add_listener", 00:20:17.996 "params": { 
00:20:17.996 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.996 "listen_address": { 00:20:17.996 "trtype": "TCP", 00:20:17.996 "adrfam": "IPv4", 00:20:17.996 "traddr": "10.0.0.2", 00:20:17.996 "trsvcid": "4420" 00:20:17.996 }, 00:20:17.996 "secure_channel": true 00:20:17.996 } 00:20:17.996 } 00:20:17.996 ] 00:20:17.996 } 00:20:17.996 ] 00:20:17.996 }' 00:20:17.996 16:02:57 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:17.996 16:02:57 -- target/tls.sh@264 -- # bperfcfg='{ 00:20:17.996 "subsystems": [ 00:20:17.996 { 00:20:17.996 "subsystem": "keyring", 00:20:17.996 "config": [ 00:20:17.996 { 00:20:17.996 "method": "keyring_file_add_key", 00:20:17.996 "params": { 00:20:17.996 "name": "key0", 00:20:17.996 "path": "/tmp/tmp.7lTupJrdXS" 00:20:17.996 } 00:20:17.996 } 00:20:17.996 ] 00:20:17.996 }, 00:20:17.996 { 00:20:17.996 "subsystem": "iobuf", 00:20:17.996 "config": [ 00:20:17.996 { 00:20:17.996 "method": "iobuf_set_options", 00:20:17.996 "params": { 00:20:17.996 "small_pool_count": 8192, 00:20:17.996 "large_pool_count": 1024, 00:20:17.996 "small_bufsize": 8192, 00:20:17.996 "large_bufsize": 135168 00:20:17.996 } 00:20:17.996 } 00:20:17.996 ] 00:20:17.996 }, 00:20:17.996 { 00:20:17.996 "subsystem": "sock", 00:20:17.996 "config": [ 00:20:17.996 { 00:20:17.996 "method": "sock_impl_set_options", 00:20:17.996 "params": { 00:20:17.996 "impl_name": "posix", 00:20:17.996 "recv_buf_size": 2097152, 00:20:17.996 "send_buf_size": 2097152, 00:20:17.996 "enable_recv_pipe": true, 00:20:17.996 "enable_quickack": false, 00:20:17.996 "enable_placement_id": 0, 00:20:17.996 "enable_zerocopy_send_server": true, 00:20:17.996 "enable_zerocopy_send_client": false, 00:20:17.996 "zerocopy_threshold": 0, 00:20:17.996 "tls_version": 0, 00:20:17.996 "enable_ktls": false 00:20:17.996 } 00:20:17.996 }, 00:20:17.996 { 00:20:17.996 "method": "sock_impl_set_options", 00:20:17.996 "params": { 00:20:17.996 "impl_name": "ssl", 00:20:17.996 "recv_buf_size": 4096, 00:20:17.996 "send_buf_size": 4096, 00:20:17.996 "enable_recv_pipe": true, 00:20:17.996 "enable_quickack": false, 00:20:17.996 "enable_placement_id": 0, 00:20:17.996 "enable_zerocopy_send_server": true, 00:20:17.996 "enable_zerocopy_send_client": false, 00:20:17.996 "zerocopy_threshold": 0, 00:20:17.996 "tls_version": 0, 00:20:17.996 "enable_ktls": false 00:20:17.996 } 00:20:17.996 } 00:20:17.996 ] 00:20:17.996 }, 00:20:17.996 { 00:20:17.996 "subsystem": "vmd", 00:20:17.996 "config": [] 00:20:17.996 }, 00:20:17.996 { 00:20:17.996 "subsystem": "accel", 00:20:17.996 "config": [ 00:20:17.996 { 00:20:17.996 "method": "accel_set_options", 00:20:17.996 "params": { 00:20:17.996 "small_cache_size": 128, 00:20:17.996 "large_cache_size": 16, 00:20:17.996 "task_count": 2048, 00:20:17.996 "sequence_count": 2048, 00:20:17.996 "buf_count": 2048 00:20:17.996 } 00:20:17.996 } 00:20:17.996 ] 00:20:17.996 }, 00:20:17.996 { 00:20:17.996 "subsystem": "bdev", 00:20:17.996 "config": [ 00:20:17.996 { 00:20:17.996 "method": "bdev_set_options", 00:20:17.996 "params": { 00:20:17.996 "bdev_io_pool_size": 65535, 00:20:17.996 "bdev_io_cache_size": 256, 00:20:17.996 "bdev_auto_examine": true, 00:20:17.996 "iobuf_small_cache_size": 128, 00:20:17.996 "iobuf_large_cache_size": 16 00:20:17.996 } 00:20:17.996 }, 00:20:17.996 { 00:20:17.996 "method": "bdev_raid_set_options", 00:20:17.996 "params": { 00:20:17.996 "process_window_size_kb": 1024 00:20:17.996 } 00:20:17.996 }, 00:20:17.996 { 00:20:17.996 "method": 
"bdev_iscsi_set_options", 00:20:17.996 "params": { 00:20:17.996 "timeout_sec": 30 00:20:17.996 } 00:20:17.996 }, 00:20:17.996 { 00:20:17.996 "method": "bdev_nvme_set_options", 00:20:17.996 "params": { 00:20:17.996 "action_on_timeout": "none", 00:20:17.996 "timeout_us": 0, 00:20:17.996 "timeout_admin_us": 0, 00:20:17.996 "keep_alive_timeout_ms": 10000, 00:20:17.996 "arbitration_burst": 0, 00:20:17.996 "low_priority_weight": 0, 00:20:17.996 "medium_priority_weight": 0, 00:20:17.996 "high_priority_weight": 0, 00:20:17.996 "nvme_adminq_poll_period_us": 10000, 00:20:17.996 "nvme_ioq_poll_period_us": 0, 00:20:17.996 "io_queue_requests": 512, 00:20:17.996 "delay_cmd_submit": true, 00:20:17.996 "transport_retry_count": 4, 00:20:17.996 "bdev_retry_count": 3, 00:20:17.996 "transport_ack_timeout": 0, 00:20:17.996 "ctrlr_loss_timeout_sec": 0, 00:20:17.996 "reconnect_delay_sec": 0, 00:20:17.996 "fast_io_fail_timeout_sec": 0, 00:20:17.996 "disable_auto_failback": false, 00:20:17.996 "generate_uuids": false, 00:20:17.996 "transport_tos": 0, 00:20:17.996 "nvme_error_stat": false, 00:20:17.996 "rdma_srq_size": 0, 00:20:17.997 "io_path_stat": false, 00:20:17.997 "allow_accel_sequence": false, 00:20:17.997 "rdma_max_cq_size": 0, 00:20:17.997 "rdma_cm_event_timeout_ms": 0, 00:20:17.997 "dhchap_digests": [ 00:20:17.997 "sha256", 00:20:17.997 "sha384", 00:20:17.997 "sha512" 00:20:17.997 ], 00:20:17.997 "dhchap_dhgroups": [ 00:20:17.997 "null", 00:20:17.997 "ffdhe2048", 00:20:17.997 "ffdhe3072", 00:20:17.997 "ffdhe4096", 00:20:17.997 "ffdhe6144", 00:20:17.997 "ffdhe8192" 00:20:17.997 ] 00:20:17.997 } 00:20:17.997 }, 00:20:17.997 { 00:20:17.997 "method": "bdev_nvme_attach_controller", 00:20:17.997 "params": { 00:20:17.997 "name": "nvme0", 00:20:17.997 "trtype": "TCP", 00:20:17.997 "adrfam": "IPv4", 00:20:17.997 "traddr": "10.0.0.2", 00:20:17.997 "trsvcid": "4420", 00:20:17.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.997 "prchk_reftag": false, 00:20:17.997 "prchk_guard": false, 00:20:17.997 "ctrlr_loss_timeout_sec": 0, 00:20:17.997 "reconnect_delay_sec": 0, 00:20:17.997 "fast_io_fail_timeout_sec": 0, 00:20:17.997 "psk": "key0", 00:20:17.997 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:17.997 "hdgst": false, 00:20:17.997 "ddgst": false 00:20:17.997 } 00:20:17.997 }, 00:20:17.997 { 00:20:17.997 "method": "bdev_nvme_set_hotplug", 00:20:17.997 "params": { 00:20:17.997 "period_us": 100000, 00:20:17.997 "enable": false 00:20:17.997 } 00:20:17.997 }, 00:20:17.997 { 00:20:17.997 "method": "bdev_enable_histogram", 00:20:17.997 "params": { 00:20:17.997 "name": "nvme0n1", 00:20:17.997 "enable": true 00:20:17.997 } 00:20:17.997 }, 00:20:17.997 { 00:20:17.997 "method": "bdev_wait_for_examine" 00:20:17.997 } 00:20:17.997 ] 00:20:17.997 }, 00:20:17.997 { 00:20:17.997 "subsystem": "nbd", 00:20:17.997 "config": [] 00:20:17.997 } 00:20:17.997 ] 00:20:17.997 }' 00:20:17.997 16:02:57 -- target/tls.sh@266 -- # killprocess 2481919 00:20:17.997 16:02:57 -- common/autotest_common.sh@936 -- # '[' -z 2481919 ']' 00:20:17.997 16:02:57 -- common/autotest_common.sh@940 -- # kill -0 2481919 00:20:17.997 16:02:57 -- common/autotest_common.sh@941 -- # uname 00:20:17.997 16:02:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:17.997 16:02:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2481919 00:20:17.997 16:02:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:17.997 16:02:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:17.997 16:02:57 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 2481919' 00:20:17.997 killing process with pid 2481919 00:20:17.997 16:02:57 -- common/autotest_common.sh@955 -- # kill 2481919 00:20:17.997 Received shutdown signal, test time was about 1.000000 seconds 00:20:17.997 00:20:17.997 Latency(us) 00:20:17.997 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.997 =================================================================================================================== 00:20:17.997 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:17.997 16:02:57 -- common/autotest_common.sh@960 -- # wait 2481919 00:20:19.373 16:02:58 -- target/tls.sh@267 -- # killprocess 2481838 00:20:19.373 16:02:58 -- common/autotest_common.sh@936 -- # '[' -z 2481838 ']' 00:20:19.373 16:02:58 -- common/autotest_common.sh@940 -- # kill -0 2481838 00:20:19.373 16:02:58 -- common/autotest_common.sh@941 -- # uname 00:20:19.373 16:02:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:19.373 16:02:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2481838 00:20:19.373 16:02:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:19.373 16:02:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:19.373 16:02:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2481838' 00:20:19.373 killing process with pid 2481838 00:20:19.373 16:02:58 -- common/autotest_common.sh@955 -- # kill 2481838 00:20:19.373 16:02:58 -- common/autotest_common.sh@960 -- # wait 2481838 00:20:20.748 16:03:00 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:20:20.748 16:03:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:20.748 16:03:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:20.748 16:03:00 -- target/tls.sh@269 -- # echo '{ 00:20:20.748 "subsystems": [ 00:20:20.748 { 00:20:20.748 "subsystem": "keyring", 00:20:20.748 "config": [ 00:20:20.748 { 00:20:20.748 "method": "keyring_file_add_key", 00:20:20.748 "params": { 00:20:20.748 "name": "key0", 00:20:20.748 "path": "/tmp/tmp.7lTupJrdXS" 00:20:20.748 } 00:20:20.748 } 00:20:20.748 ] 00:20:20.748 }, 00:20:20.748 { 00:20:20.748 "subsystem": "iobuf", 00:20:20.748 "config": [ 00:20:20.748 { 00:20:20.748 "method": "iobuf_set_options", 00:20:20.748 "params": { 00:20:20.748 "small_pool_count": 8192, 00:20:20.748 "large_pool_count": 1024, 00:20:20.748 "small_bufsize": 8192, 00:20:20.748 "large_bufsize": 135168 00:20:20.748 } 00:20:20.748 } 00:20:20.748 ] 00:20:20.748 }, 00:20:20.748 { 00:20:20.748 "subsystem": "sock", 00:20:20.748 "config": [ 00:20:20.748 { 00:20:20.748 "method": "sock_impl_set_options", 00:20:20.748 "params": { 00:20:20.748 "impl_name": "posix", 00:20:20.748 "recv_buf_size": 2097152, 00:20:20.748 "send_buf_size": 2097152, 00:20:20.748 "enable_recv_pipe": true, 00:20:20.748 "enable_quickack": false, 00:20:20.748 "enable_placement_id": 0, 00:20:20.748 "enable_zerocopy_send_server": true, 00:20:20.748 "enable_zerocopy_send_client": false, 00:20:20.748 "zerocopy_threshold": 0, 00:20:20.748 "tls_version": 0, 00:20:20.748 "enable_ktls": false 00:20:20.748 } 00:20:20.748 }, 00:20:20.748 { 00:20:20.748 "method": "sock_impl_set_options", 00:20:20.748 "params": { 00:20:20.748 "impl_name": "ssl", 00:20:20.748 "recv_buf_size": 4096, 00:20:20.748 "send_buf_size": 4096, 00:20:20.748 "enable_recv_pipe": true, 00:20:20.748 "enable_quickack": false, 00:20:20.748 "enable_placement_id": 0, 00:20:20.748 "enable_zerocopy_send_server": true, 00:20:20.748 
"enable_zerocopy_send_client": false, 00:20:20.748 "zerocopy_threshold": 0, 00:20:20.748 "tls_version": 0, 00:20:20.748 "enable_ktls": false 00:20:20.748 } 00:20:20.748 } 00:20:20.748 ] 00:20:20.748 }, 00:20:20.748 { 00:20:20.748 "subsystem": "vmd", 00:20:20.748 "config": [] 00:20:20.748 }, 00:20:20.748 { 00:20:20.748 "subsystem": "accel", 00:20:20.748 "config": [ 00:20:20.748 { 00:20:20.748 "method": "accel_set_options", 00:20:20.748 "params": { 00:20:20.748 "small_cache_size": 128, 00:20:20.748 "large_cache_size": 16, 00:20:20.748 "task_count": 2048, 00:20:20.748 "sequence_count": 2048, 00:20:20.748 "buf_count": 2048 00:20:20.748 } 00:20:20.748 } 00:20:20.748 ] 00:20:20.748 }, 00:20:20.748 { 00:20:20.748 "subsystem": "bdev", 00:20:20.748 "config": [ 00:20:20.748 { 00:20:20.748 "method": "bdev_set_options", 00:20:20.748 "params": { 00:20:20.748 "bdev_io_pool_size": 65535, 00:20:20.749 "bdev_io_cache_size": 256, 00:20:20.749 "bdev_auto_examine": true, 00:20:20.749 "iobuf_small_cache_size": 128, 00:20:20.749 "iobuf_large_cache_size": 16 00:20:20.749 } 00:20:20.749 }, 00:20:20.749 { 00:20:20.749 "method": "bdev_raid_set_options", 00:20:20.749 "params": { 00:20:20.749 "process_window_size_kb": 1024 00:20:20.749 } 00:20:20.749 }, 00:20:20.749 { 00:20:20.749 "method": "bdev_iscsi_set_options", 00:20:20.749 "params": { 00:20:20.749 "timeout_sec": 30 00:20:20.749 } 00:20:20.749 }, 00:20:20.749 { 00:20:20.749 "method": "bdev_nvme_set_options", 00:20:20.749 "params": { 00:20:20.749 "action_on_timeout": "none", 00:20:20.749 "timeout_us": 0, 00:20:20.749 "timeout_admin_us": 0, 00:20:20.749 "keep_alive_timeout_ms": 10000, 00:20:20.749 "arbitration_burst": 0, 00:20:20.749 "low_priority_weight": 0, 00:20:20.749 "medium_priority_weight": 0, 00:20:20.749 "high_priority_weight": 0, 00:20:20.749 "nvme_adminq_poll_period_us": 10000, 00:20:20.749 "nvme_ioq_poll_period_us": 0, 00:20:20.749 "io_queue_requests": 0, 00:20:20.749 "delay_cmd_submit": true, 00:20:20.749 "transport_retry_count": 4, 00:20:20.749 "bdev_retry_count": 3, 00:20:20.749 "transport_ack_timeout": 0, 00:20:20.749 "ctrlr_loss_timeout_sec": 0, 00:20:20.749 "reconnect_delay_sec": 0, 00:20:20.749 "fast_io_fail_timeout_sec": 0, 00:20:20.749 "disable_auto_failback": false, 00:20:20.749 "generate_uuids": false, 00:20:20.749 "transport_tos": 0, 00:20:20.749 "nvme_error_stat": false, 00:20:20.749 "rdma_srq_size": 0, 00:20:20.749 "io_path_stat": false, 00:20:20.749 "allow_accel_sequence": false, 00:20:20.749 "rdma_max_cq_size": 0, 00:20:20.749 "rdma_cm_event_timeout_ms": 0, 00:20:20.749 "dhchap_digests": [ 00:20:20.749 "sha256", 00:20:20.749 "sha384", 00:20:20.749 "sha512" 00:20:20.749 ], 00:20:20.749 "dhchap_dhgroups": [ 00:20:20.749 "null", 00:20:20.749 "ffdhe2048", 00:20:20.749 "ffdhe3072", 00:20:20.749 "ffdhe4096", 00:20:20.749 "ffdhe6144", 00:20:20.749 "ffdhe8192" 00:20:20.749 ] 00:20:20.749 } 00:20:20.749 }, 00:20:20.749 { 00:20:20.749 "method": "bdev_nvme_set_hotplug", 00:20:20.749 "params": { 00:20:20.749 "period_us": 100000, 00:20:20.749 "enable": false 00:20:20.749 } 00:20:20.749 }, 00:20:20.749 { 00:20:20.749 "method": "bdev_malloc_create", 00:20:20.749 "params": { 00:20:20.749 "name": "malloc0", 00:20:20.749 "num_blocks": 8192, 00:20:20.749 "block_size": 4096, 00:20:20.749 "physical_block_size": 4096, 00:20:20.749 "uuid": "d57db980-e064-44c8-a5ad-01be6b68642a", 00:20:20.749 "optimal_io_boundary": 0 00:20:20.749 } 00:20:20.749 }, 00:20:20.749 { 00:20:20.749 "method": "bdev_wait_for_examine" 00:20:20.749 } 00:20:20.749 ] 00:20:20.749 }, 
00:20:20.749 { 00:20:20.749 "subsystem": "nbd", 00:20:20.749 "config": [] 00:20:20.749 }, 00:20:20.749 { 00:20:20.749 "subsystem": "scheduler", 00:20:20.749 "config": [ 00:20:20.749 { 00:20:20.749 "method": "framework_set_scheduler", 00:20:20.749 "params": { 00:20:20.749 "name": "static" 00:20:20.749 } 00:20:20.749 } 00:20:20.749 ] 00:20:20.749 }, 00:20:20.749 { 00:20:20.749 "subsystem": "nvmf", 00:20:20.749 "config": [ 00:20:20.749 { 00:20:20.749 "method": "nvmf_set_config", 00:20:20.749 "params": { 00:20:20.749 "discovery_filter": "match_any", 00:20:20.749 "admin_cmd_passthru": { 00:20:20.749 "identify_ctrlr": false 00:20:20.749 } 00:20:20.749 } 00:20:20.749 }, 00:20:20.749 { 00:20:20.749 "method": "nvmf_set_max_subsystems", 00:20:20.749 "params": { 00:20:20.749 "max_subsystems": 1024 00:20:20.749 } 00:20:20.749 }, 00:20:20.749 { 00:20:20.749 "method": "nvmf_set_crdt", 00:20:20.749 "params": { 00:20:20.749 "crdt1": 0, 00:20:20.749 "crdt2": 0, 00:20:20.749 "crdt3": 0 00:20:20.749 } 00:20:20.749 }, 00:20:20.749 { 00:20:20.749 "method": "nvmf_create_transport", 00:20:20.749 "params": { 00:20:20.749 "trtype": "TCP", 00:20:20.749 "max_queue_depth": 128, 00:20:20.749 "max_io_qpairs_per_ctrlr": 127, 00:20:20.749 "in_capsule_data_size": 4096, 00:20:20.749 "max_io_size": 131072, 00:20:20.749 "io_unit_size": 131072, 00:20:20.749 "max_aq_depth": 128, 00:20:20.749 "num_shared_buffers": 511, 00:20:20.749 "buf_cache_size": 4294967295, 00:20:20.749 "dif_insert_or_strip": false, 00:20:20.749 "zcopy": false, 00:20:20.749 "c2h_success": false, 00:20:20.749 "sock_priority": 0, 00:20:20.749 "abort_timeout_sec": 1, 00:20:20.749 "ack_timeout": 0, 00:20:20.749 "data_wr_pool_size": 0 00:20:20.749 } 00:20:20.749 }, 00:20:20.749 { 00:20:20.749 "method": "nvmf_create_subsystem", 00:20:20.749 "params": { 00:20:20.749 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.749 "allow_any_host": false, 00:20:20.749 "serial_number": "00000000000000000000", 00:20:20.749 "model_number": "SPDK bdev Controller", 00:20:20.749 "max_namespaces": 32, 00:20:20.749 "min_cntlid": 1, 00:20:20.749 "max_cntlid": 65519, 00:20:20.749 "ana_reporting": false 00:20:20.749 } 00:20:20.749 }, 00:20:20.749 { 00:20:20.749 "method": "nvmf_subsystem_add_host", 00:20:20.749 "params": { 00:20:20.749 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.749 "host": "nqn.2016-06.io.spdk:host1", 00:20:20.749 "psk": "key0" 00:20:20.749 } 00:20:20.749 }, 00:20:20.749 { 00:20:20.749 "method": "nvmf_subsystem_add_ns", 00:20:20.749 "params": { 00:20:20.749 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.749 "namespace": { 00:20:20.749 "nsid": 1, 00:20:20.749 "bdev_name": "malloc0", 00:20:20.749 "nguid": "D57DB980E06444C8A5AD01BE6B68642A", 00:20:20.749 "uuid": "d57db980-e064-44c8-a5ad-01be6b68642a", 00:20:20.749 "no_auto_visible": false 00:20:20.749 } 00:20:20.749 } 00:20:20.749 }, 00:20:20.749 { 00:20:20.749 "method": "nvmf_subsystem_add_listener", 00:20:20.749 "params": { 00:20:20.749 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.749 "listen_address": { 00:20:20.749 "trtype": "TCP", 00:20:20.749 "adrfam": "IPv4", 00:20:20.749 "traddr": "10.0.0.2", 00:20:20.749 "trsvcid": "4420" 00:20:20.749 }, 00:20:20.749 "secure_channel": true 00:20:20.749 } 00:20:20.749 } 00:20:20.749 ] 00:20:20.749 } 00:20:20.749 ] 00:20:20.749 }' 00:20:20.749 16:03:00 -- common/autotest_common.sh@10 -- # set +x 00:20:20.749 16:03:00 -- nvmf/common.sh@470 -- # nvmfpid=2482863 00:20:20.749 16:03:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:20.749 16:03:00 -- nvmf/common.sh@471 -- # waitforlisten 2482863 00:20:20.749 16:03:00 -- common/autotest_common.sh@817 -- # '[' -z 2482863 ']' 00:20:20.749 16:03:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.749 16:03:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:20.749 16:03:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.749 16:03:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:20.749 16:03:00 -- common/autotest_common.sh@10 -- # set +x 00:20:20.749 [2024-04-26 16:03:00.144868] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:20:20.749 [2024-04-26 16:03:00.144957] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.749 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.749 [2024-04-26 16:03:00.254544] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.010 [2024-04-26 16:03:00.476333] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.010 [2024-04-26 16:03:00.476377] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.010 [2024-04-26 16:03:00.476390] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.010 [2024-04-26 16:03:00.476401] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:21.010 [2024-04-26 16:03:00.476411] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
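(For reference: this final pass starts nvmf_tgt with '-c /dev/fd/62', i.e. the JSON emitted by save_config above is fed straight back in at startup. Outside the test harness the same round-trip can be sketched roughly as below; the file name is illustrative.)
  rpc.py save_config > nvmf_tgt.json     # dump the live configuration of a running target
  nvmf_tgt -c nvmf_tgt.json              # start a new target with that configuration applied at boot
  rpc.py load_config < nvmf_tgt.json     # alternative: replay it over the RPC socket of a running target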
00:20:21.010 [2024-04-26 16:03:00.476496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.664 [2024-04-26 16:03:01.040082] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.664 [2024-04-26 16:03:01.072128] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:21.664 [2024-04-26 16:03:01.072346] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.664 16:03:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:21.664 16:03:01 -- common/autotest_common.sh@850 -- # return 0 00:20:21.664 16:03:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:21.664 16:03:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:21.664 16:03:01 -- common/autotest_common.sh@10 -- # set +x 00:20:21.664 16:03:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.664 16:03:01 -- target/tls.sh@272 -- # bdevperf_pid=2483156 00:20:21.664 16:03:01 -- target/tls.sh@273 -- # waitforlisten 2483156 /var/tmp/bdevperf.sock 00:20:21.664 16:03:01 -- common/autotest_common.sh@817 -- # '[' -z 2483156 ']' 00:20:21.664 16:03:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.664 16:03:01 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:21.664 16:03:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:21.664 16:03:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:21.664 16:03:01 -- target/tls.sh@270 -- # echo '{ 00:20:21.664 "subsystems": [ 00:20:21.664 { 00:20:21.664 "subsystem": "keyring", 00:20:21.664 "config": [ 00:20:21.664 { 00:20:21.664 "method": "keyring_file_add_key", 00:20:21.664 "params": { 00:20:21.664 "name": "key0", 00:20:21.664 "path": "/tmp/tmp.7lTupJrdXS" 00:20:21.664 } 00:20:21.664 } 00:20:21.664 ] 00:20:21.664 }, 00:20:21.664 { 00:20:21.664 "subsystem": "iobuf", 00:20:21.664 "config": [ 00:20:21.664 { 00:20:21.664 "method": "iobuf_set_options", 00:20:21.664 "params": { 00:20:21.664 "small_pool_count": 8192, 00:20:21.664 "large_pool_count": 1024, 00:20:21.664 "small_bufsize": 8192, 00:20:21.664 "large_bufsize": 135168 00:20:21.664 } 00:20:21.664 } 00:20:21.664 ] 00:20:21.664 }, 00:20:21.664 { 00:20:21.664 "subsystem": "sock", 00:20:21.664 "config": [ 00:20:21.664 { 00:20:21.664 "method": "sock_impl_set_options", 00:20:21.664 "params": { 00:20:21.664 "impl_name": "posix", 00:20:21.664 "recv_buf_size": 2097152, 00:20:21.664 "send_buf_size": 2097152, 00:20:21.664 "enable_recv_pipe": true, 00:20:21.664 "enable_quickack": false, 00:20:21.664 "enable_placement_id": 0, 00:20:21.664 "enable_zerocopy_send_server": true, 00:20:21.664 "enable_zerocopy_send_client": false, 00:20:21.664 "zerocopy_threshold": 0, 00:20:21.664 "tls_version": 0, 00:20:21.664 "enable_ktls": false 00:20:21.664 } 00:20:21.664 }, 00:20:21.664 { 00:20:21.664 "method": "sock_impl_set_options", 00:20:21.664 "params": { 00:20:21.664 "impl_name": "ssl", 00:20:21.664 "recv_buf_size": 4096, 00:20:21.664 "send_buf_size": 4096, 00:20:21.664 "enable_recv_pipe": true, 00:20:21.664 "enable_quickack": false, 00:20:21.664 "enable_placement_id": 0, 00:20:21.664 "enable_zerocopy_send_server": true, 00:20:21.664 "enable_zerocopy_send_client": false, 00:20:21.664 "zerocopy_threshold": 0, 00:20:21.664 "tls_version": 0, 00:20:21.664 "enable_ktls": false 00:20:21.664 } 00:20:21.664 } 00:20:21.664 ] 00:20:21.664 }, 00:20:21.664 { 00:20:21.664 "subsystem": "vmd", 00:20:21.664 "config": [] 00:20:21.664 }, 00:20:21.664 { 00:20:21.664 "subsystem": "accel", 00:20:21.664 "config": [ 00:20:21.664 { 00:20:21.664 "method": "accel_set_options", 00:20:21.664 "params": { 00:20:21.664 "small_cache_size": 128, 00:20:21.664 "large_cache_size": 16, 00:20:21.664 "task_count": 2048, 00:20:21.664 "sequence_count": 2048, 00:20:21.664 "buf_count": 2048 00:20:21.664 } 00:20:21.664 } 00:20:21.664 ] 00:20:21.664 }, 00:20:21.664 { 00:20:21.664 "subsystem": "bdev", 00:20:21.664 "config": [ 00:20:21.664 { 00:20:21.664 "method": "bdev_set_options", 00:20:21.664 "params": { 00:20:21.664 "bdev_io_pool_size": 65535, 00:20:21.664 "bdev_io_cache_size": 256, 00:20:21.664 "bdev_auto_examine": true, 00:20:21.664 "iobuf_small_cache_size": 128, 00:20:21.664 "iobuf_large_cache_size": 16 00:20:21.664 } 00:20:21.664 }, 00:20:21.664 { 00:20:21.664 "method": "bdev_raid_set_options", 00:20:21.664 "params": { 00:20:21.664 "process_window_size_kb": 1024 00:20:21.664 } 00:20:21.664 }, 00:20:21.664 { 00:20:21.664 "method": "bdev_iscsi_set_options", 00:20:21.664 "params": { 00:20:21.664 "timeout_sec": 30 00:20:21.664 } 00:20:21.665 }, 00:20:21.665 { 00:20:21.665 "method": "bdev_nvme_set_options", 00:20:21.665 "params": { 00:20:21.665 "action_on_timeout": "none", 00:20:21.665 "timeout_us": 0, 00:20:21.665 "timeout_admin_us": 0, 00:20:21.665 "keep_alive_timeout_ms": 10000, 00:20:21.665 "arbitration_burst": 0, 00:20:21.665 "low_priority_weight": 0, 00:20:21.665 "medium_priority_weight": 0, 00:20:21.665 "high_priority_weight": 0, 
00:20:21.665 "nvme_adminq_poll_period_us": 10000, 00:20:21.665 "nvme_ioq_poll_period_us": 0, 00:20:21.665 "io_queue_requests": 512, 00:20:21.665 "delay_cmd_submit": true, 00:20:21.665 "transport_retry_count": 4, 00:20:21.665 "bdev_retry_count": 3, 00:20:21.665 "transport_ack_timeout": 0, 00:20:21.665 "ctrlr_loss_timeout_sec": 0, 00:20:21.665 "reconnect_delay_sec": 0, 00:20:21.665 "fast_io_fail_timeout_sec": 0, 00:20:21.665 "disable_auto_failback": false, 00:20:21.665 "generate_uuids": false, 00:20:21.665 "transport_tos": 0, 00:20:21.665 "nvme_error_stat": false, 00:20:21.665 "rdma_srq_size": 0, 00:20:21.665 "io_path_stat": false, 00:20:21.665 "allow_accel_sequence": false, 00:20:21.665 "rdma_max_cq_size": 0, 00:20:21.665 "rdma_cm_event_timeout_ms": 0, 00:20:21.665 "dhchap_digests": [ 00:20:21.665 "sha256", 00:20:21.665 "sha384", 00:20:21.665 "sha512" 00:20:21.665 ], 00:20:21.665 "dhchap_dhgroups": [ 00:20:21.665 "null", 00:20:21.665 "ffdhe2048", 00:20:21.665 "ffdhe3072", 00:20:21.665 "ffdhe4096", 00:20:21.665 "ffdhe6144", 00:20:21.665 "ffdhe8192" 00:20:21.665 ] 00:20:21.665 } 00:20:21.665 }, 00:20:21.665 { 00:20:21.665 "method": "bdev_nvme_attach_controller", 00:20:21.665 "params": { 00:20:21.665 "name": "nvme0", 00:20:21.665 "trtype": "TCP", 00:20:21.665 "adrfam": "IPv4", 00:20:21.665 "traddr": "10.0.0.2", 00:20:21.665 "trsvcid": "4420", 00:20:21.665 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.665 "prchk_reftag": false, 00:20:21.665 "prchk_guard": false, 00:20:21.665 "ctrlr_loss_timeout_sec": 0, 00:20:21.665 "reconnect_delay_sec": 0, 00:20:21.665 "fast_io_fail_timeout_sec": 0, 00:20:21.665 "psk": "key0", 00:20:21.665 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.665 "hdgst": false, 00:20:21.665 "ddgst": false 00:20:21.665 } 00:20:21.665 }, 00:20:21.665 { 00:20:21.665 "method": "bdev_nvme_set_hotplug", 00:20:21.665 "params": { 00:20:21.665 "period_us": 100000, 00:20:21.665 "enable": false 00:20:21.665 } 00:20:21.665 }, 00:20:21.665 { 00:20:21.665 "method": "bdev_enable_histogram", 00:20:21.665 "params": { 00:20:21.665 "name": "nvme0n1", 00:20:21.665 "enable": true 00:20:21.665 } 00:20:21.665 }, 00:20:21.665 { 00:20:21.665 "method": "bdev_wait_for_examine" 00:20:21.665 } 00:20:21.665 ] 00:20:21.665 }, 00:20:21.665 { 00:20:21.665 "subsystem": "nbd", 00:20:21.665 "config": [] 00:20:21.665 } 00:20:21.665 ] 00:20:21.665 }' 00:20:21.665 16:03:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:21.665 16:03:01 -- common/autotest_common.sh@10 -- # set +x 00:20:21.665 [2024-04-26 16:03:01.207437] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:20:21.665 [2024-04-26 16:03:01.207583] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2483156 ] 00:20:21.665 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.665 [2024-04-26 16:03:01.314853] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.923 [2024-04-26 16:03:01.549996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.489 [2024-04-26 16:03:01.996407] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:22.489 16:03:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:22.489 16:03:02 -- common/autotest_common.sh@850 -- # return 0 00:20:22.489 16:03:02 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:22.489 16:03:02 -- target/tls.sh@275 -- # jq -r '.[].name' 00:20:22.747 16:03:02 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.747 16:03:02 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:22.747 Running I/O for 1 seconds... 00:20:24.124 00:20:24.124 Latency(us) 00:20:24.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.124 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:24.124 Verification LBA range: start 0x0 length 0x2000 00:20:24.124 nvme0n1 : 1.06 1576.98 6.16 0.00 0.00 79401.48 8719.14 116255.17 00:20:24.124 =================================================================================================================== 00:20:24.124 Total : 1576.98 6.16 0.00 0.00 79401.48 8719.14 116255.17 00:20:24.124 0 00:20:24.124 16:03:03 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:20:24.124 16:03:03 -- target/tls.sh@279 -- # cleanup 00:20:24.124 16:03:03 -- target/tls.sh@15 -- # process_shm --id 0 00:20:24.124 16:03:03 -- common/autotest_common.sh@794 -- # type=--id 00:20:24.124 16:03:03 -- common/autotest_common.sh@795 -- # id=0 00:20:24.124 16:03:03 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:20:24.124 16:03:03 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:24.124 16:03:03 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:20:24.124 16:03:03 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:20:24.124 16:03:03 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:20:24.124 16:03:03 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:24.124 nvmf_trace.0 00:20:24.124 16:03:03 -- common/autotest_common.sh@809 -- # return 0 00:20:24.124 16:03:03 -- target/tls.sh@16 -- # killprocess 2483156 00:20:24.124 16:03:03 -- common/autotest_common.sh@936 -- # '[' -z 2483156 ']' 00:20:24.124 16:03:03 -- common/autotest_common.sh@940 -- # kill -0 2483156 00:20:24.124 16:03:03 -- common/autotest_common.sh@941 -- # uname 00:20:24.124 16:03:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:24.124 16:03:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2483156 00:20:24.124 16:03:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:24.124 16:03:03 -- common/autotest_common.sh@946 -- # 
'[' reactor_1 = sudo ']' 00:20:24.124 16:03:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2483156' 00:20:24.124 killing process with pid 2483156 00:20:24.124 16:03:03 -- common/autotest_common.sh@955 -- # kill 2483156 00:20:24.124 Received shutdown signal, test time was about 1.000000 seconds 00:20:24.124 00:20:24.124 Latency(us) 00:20:24.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.124 =================================================================================================================== 00:20:24.124 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:24.124 16:03:03 -- common/autotest_common.sh@960 -- # wait 2483156 00:20:25.058 16:03:04 -- target/tls.sh@17 -- # nvmftestfini 00:20:25.058 16:03:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:25.058 16:03:04 -- nvmf/common.sh@117 -- # sync 00:20:25.058 16:03:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:25.058 16:03:04 -- nvmf/common.sh@120 -- # set +e 00:20:25.058 16:03:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:25.058 16:03:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:25.058 rmmod nvme_tcp 00:20:25.058 rmmod nvme_fabrics 00:20:25.058 rmmod nvme_keyring 00:20:25.058 16:03:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:25.058 16:03:04 -- nvmf/common.sh@124 -- # set -e 00:20:25.058 16:03:04 -- nvmf/common.sh@125 -- # return 0 00:20:25.058 16:03:04 -- nvmf/common.sh@478 -- # '[' -n 2482863 ']' 00:20:25.058 16:03:04 -- nvmf/common.sh@479 -- # killprocess 2482863 00:20:25.058 16:03:04 -- common/autotest_common.sh@936 -- # '[' -z 2482863 ']' 00:20:25.058 16:03:04 -- common/autotest_common.sh@940 -- # kill -0 2482863 00:20:25.058 16:03:04 -- common/autotest_common.sh@941 -- # uname 00:20:25.058 16:03:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:25.058 16:03:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2482863 00:20:25.058 16:03:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:25.058 16:03:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:25.058 16:03:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2482863' 00:20:25.058 killing process with pid 2482863 00:20:25.058 16:03:04 -- common/autotest_common.sh@955 -- # kill 2482863 00:20:25.058 16:03:04 -- common/autotest_common.sh@960 -- # wait 2482863 00:20:26.434 16:03:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:26.435 16:03:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:26.435 16:03:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:26.435 16:03:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:26.435 16:03:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:26.435 16:03:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.435 16:03:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.435 16:03:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.970 16:03:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:28.970 16:03:08 -- target/tls.sh@18 -- # rm -f /tmp/tmp.aqs3MaGWiS /tmp/tmp.Hwunwy2kmR /tmp/tmp.7lTupJrdXS 00:20:28.970 00:20:28.970 real 1m45.610s 00:20:28.970 user 2m43.242s 00:20:28.970 sys 0m28.090s 00:20:28.970 16:03:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:28.970 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:20:28.970 ************************************ 00:20:28.970 END TEST nvmf_tls 00:20:28.970 
************************************ 00:20:28.970 16:03:08 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:28.970 16:03:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:28.970 16:03:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:28.970 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:20:28.970 ************************************ 00:20:28.970 START TEST nvmf_fips 00:20:28.970 ************************************ 00:20:28.970 16:03:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:28.970 * Looking for test storage... 00:20:28.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:28.970 16:03:08 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.970 16:03:08 -- nvmf/common.sh@7 -- # uname -s 00:20:28.970 16:03:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.970 16:03:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.970 16:03:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.970 16:03:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.970 16:03:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.970 16:03:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.970 16:03:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.970 16:03:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.970 16:03:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.970 16:03:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.970 16:03:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:28.970 16:03:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:28.970 16:03:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.970 16:03:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.970 16:03:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.970 16:03:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.970 16:03:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.970 16:03:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.970 16:03:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.970 16:03:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.970 16:03:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.970 16:03:08 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.970 16:03:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.970 16:03:08 -- paths/export.sh@5 -- # export PATH 00:20:28.970 16:03:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.970 16:03:08 -- nvmf/common.sh@47 -- # : 0 00:20:28.970 16:03:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:28.970 16:03:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:28.970 16:03:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.970 16:03:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.970 16:03:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.970 16:03:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:28.970 16:03:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:28.970 16:03:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:28.970 16:03:08 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:28.970 16:03:08 -- fips/fips.sh@89 -- # check_openssl_version 00:20:28.971 16:03:08 -- fips/fips.sh@83 -- # local target=3.0.0 00:20:28.971 16:03:08 -- fips/fips.sh@85 -- # openssl version 00:20:28.971 16:03:08 -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:28.971 16:03:08 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:28.971 16:03:08 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:28.971 16:03:08 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:28.971 16:03:08 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:28.971 16:03:08 -- scripts/common.sh@333 -- # IFS=.-: 00:20:28.971 16:03:08 -- scripts/common.sh@333 -- # read -ra ver1 00:20:28.971 16:03:08 -- scripts/common.sh@334 -- # IFS=.-: 00:20:28.971 16:03:08 -- scripts/common.sh@334 -- # read -ra ver2 00:20:28.971 16:03:08 -- scripts/common.sh@335 -- # local 'op=>=' 00:20:28.971 16:03:08 -- scripts/common.sh@337 -- # ver1_l=3 00:20:28.971 16:03:08 -- scripts/common.sh@338 -- # ver2_l=3 00:20:28.971 16:03:08 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
00:20:28.971 16:03:08 -- scripts/common.sh@341 -- # case "$op" in 00:20:28.971 16:03:08 -- scripts/common.sh@345 -- # : 1 00:20:28.971 16:03:08 -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:28.971 16:03:08 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:28.971 16:03:08 -- scripts/common.sh@362 -- # decimal 3 00:20:28.971 16:03:08 -- scripts/common.sh@350 -- # local d=3 00:20:28.971 16:03:08 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:28.971 16:03:08 -- scripts/common.sh@352 -- # echo 3 00:20:28.971 16:03:08 -- scripts/common.sh@362 -- # ver1[v]=3 00:20:28.971 16:03:08 -- scripts/common.sh@363 -- # decimal 3 00:20:28.971 16:03:08 -- scripts/common.sh@350 -- # local d=3 00:20:28.971 16:03:08 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:28.971 16:03:08 -- scripts/common.sh@352 -- # echo 3 00:20:28.971 16:03:08 -- scripts/common.sh@363 -- # ver2[v]=3 00:20:28.971 16:03:08 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:28.971 16:03:08 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:28.971 16:03:08 -- scripts/common.sh@361 -- # (( v++ )) 00:20:28.971 16:03:08 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:28.971 16:03:08 -- scripts/common.sh@362 -- # decimal 0 00:20:28.971 16:03:08 -- scripts/common.sh@350 -- # local d=0 00:20:28.971 16:03:08 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:28.971 16:03:08 -- scripts/common.sh@352 -- # echo 0 00:20:28.971 16:03:08 -- scripts/common.sh@362 -- # ver1[v]=0 00:20:28.971 16:03:08 -- scripts/common.sh@363 -- # decimal 0 00:20:28.971 16:03:08 -- scripts/common.sh@350 -- # local d=0 00:20:28.971 16:03:08 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:28.971 16:03:08 -- scripts/common.sh@352 -- # echo 0 00:20:28.971 16:03:08 -- scripts/common.sh@363 -- # ver2[v]=0 00:20:28.971 16:03:08 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:28.971 16:03:08 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:28.971 16:03:08 -- scripts/common.sh@361 -- # (( v++ )) 00:20:28.971 16:03:08 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:28.971 16:03:08 -- scripts/common.sh@362 -- # decimal 9 00:20:28.971 16:03:08 -- scripts/common.sh@350 -- # local d=9 00:20:28.971 16:03:08 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:28.971 16:03:08 -- scripts/common.sh@352 -- # echo 9 00:20:28.971 16:03:08 -- scripts/common.sh@362 -- # ver1[v]=9 00:20:28.971 16:03:08 -- scripts/common.sh@363 -- # decimal 0 00:20:28.971 16:03:08 -- scripts/common.sh@350 -- # local d=0 00:20:28.971 16:03:08 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:28.971 16:03:08 -- scripts/common.sh@352 -- # echo 0 00:20:28.971 16:03:08 -- scripts/common.sh@363 -- # ver2[v]=0 00:20:28.971 16:03:08 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:28.971 16:03:08 -- scripts/common.sh@364 -- # return 0 00:20:28.971 16:03:08 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:28.971 16:03:08 -- fips/fips.sh@95 -- # [[ ! 
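cmp_versions above compares the detected OpenSSL version against the 3.0.0 floor one dotted component at a time (3 vs 3, 0 vs 0, 9 vs 0) and returns success as soon as a component of the installed version wins. Purely as an illustration of the same check, not part of fips.sh, the comparison can also be expressed with sort -V:

    # Succeeds when the installed OpenSSL (e.g. 3.0.9) is >= the 3.0.0 target.
    target=3.0.0
    current=$(openssl version | awk '{print $2}')
    if [[ "$(printf '%s\n' "$target" "$current" | sort -V | head -n1)" == "$target" ]]; then
        echo "OpenSSL $current satisfies >= $target"
    fi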
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:28.971 16:03:08 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:28.971 16:03:08 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:28.971 16:03:08 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:28.971 16:03:08 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:28.971 16:03:08 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:28.971 16:03:08 -- fips/fips.sh@113 -- # build_openssl_config 00:20:28.971 16:03:08 -- fips/fips.sh@37 -- # cat 00:20:28.971 16:03:08 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:20:28.971 16:03:08 -- fips/fips.sh@58 -- # cat - 00:20:28.971 16:03:08 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:28.971 16:03:08 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:28.971 16:03:08 -- fips/fips.sh@116 -- # mapfile -t providers 00:20:28.971 16:03:08 -- fips/fips.sh@116 -- # openssl list -providers 00:20:28.971 16:03:08 -- fips/fips.sh@116 -- # grep name 00:20:28.971 16:03:08 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:28.971 16:03:08 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:28.971 16:03:08 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:28.971 16:03:08 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:28.971 16:03:08 -- common/autotest_common.sh@638 -- # local es=0 00:20:28.971 16:03:08 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:28.971 16:03:08 -- common/autotest_common.sh@626 -- # local arg=openssl 00:20:28.971 16:03:08 -- fips/fips.sh@127 -- # : 00:20:28.971 16:03:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:28.971 16:03:08 -- common/autotest_common.sh@630 -- # type -t openssl 00:20:28.971 16:03:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:28.971 16:03:08 -- common/autotest_common.sh@632 -- # type -P openssl 00:20:28.971 16:03:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:28.971 16:03:08 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:20:28.971 16:03:08 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:20:28.971 16:03:08 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:20:28.971 Error setting digest 00:20:28.971 000287D3FE7E0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:28.971 000287D3FE7E0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:28.971 16:03:08 -- common/autotest_common.sh@641 -- # es=1 00:20:28.971 16:03:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:28.971 16:03:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:28.971 16:03:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:28.971 16:03:08 -- fips/fips.sh@130 -- # nvmftestinit 00:20:28.971 16:03:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:28.971 16:03:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.971 16:03:08 -- nvmf/common.sh@437 -- # prepare_net_devs 
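That block is the actual FIPS gate for the test: fips.so must exist under the directory reported by openssl info -modulesdir, OPENSSL_CONF is pointed at the spdk_fips.conf that build_openssl_config just generated, both a base and a fips provider have to be listed, and finally a non-approved digest has to fail, which is exactly what the "Error setting digest" lines show. Condensed into a standalone sketch (spdk_fips.conf is the file the script writes; the rest is stock OpenSSL 3 CLI):

    # The FIPS provider module must be installed where OpenSSL looks for modules.
    test -f "$(openssl info -modulesdir)/fips.so"

    # Run everything under the FIPS-enabling config generated by build_openssl_config.
    export OPENSSL_CONF=spdk_fips.conf

    # A base provider and a fips provider should both be active.
    openssl list -providers | grep name

    # MD5 is not FIPS-approved, so this must fail; the test treats success as an error.
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo "MD5 unexpectedly succeeded, FIPS mode is not active" >&2
        exit 1
    fi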
00:20:28.971 16:03:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:28.971 16:03:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:28.971 16:03:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.971 16:03:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:28.971 16:03:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.971 16:03:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:28.971 16:03:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:28.971 16:03:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:28.971 16:03:08 -- common/autotest_common.sh@10 -- # set +x 00:20:34.246 16:03:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:34.246 16:03:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:34.246 16:03:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:34.246 16:03:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:34.246 16:03:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:34.246 16:03:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:34.246 16:03:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:34.246 16:03:13 -- nvmf/common.sh@295 -- # net_devs=() 00:20:34.246 16:03:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:34.246 16:03:13 -- nvmf/common.sh@296 -- # e810=() 00:20:34.246 16:03:13 -- nvmf/common.sh@296 -- # local -ga e810 00:20:34.246 16:03:13 -- nvmf/common.sh@297 -- # x722=() 00:20:34.246 16:03:13 -- nvmf/common.sh@297 -- # local -ga x722 00:20:34.246 16:03:13 -- nvmf/common.sh@298 -- # mlx=() 00:20:34.246 16:03:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:34.246 16:03:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.246 16:03:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:34.246 16:03:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.246 16:03:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.246 16:03:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.246 16:03:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.246 16:03:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.246 16:03:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.246 16:03:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.246 16:03:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.246 16:03:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.246 16:03:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:34.246 16:03:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:34.246 16:03:13 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:34.246 16:03:13 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:34.246 16:03:13 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:34.246 16:03:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:34.246 16:03:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:34.246 16:03:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:34.246 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:34.246 16:03:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:34.246 16:03:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:34.246 16:03:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.246 16:03:13 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.246 16:03:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:34.246 16:03:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:34.246 16:03:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:34.246 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:34.246 16:03:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:34.246 16:03:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:34.246 16:03:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.246 16:03:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.246 16:03:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:34.246 16:03:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:34.246 16:03:13 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:34.246 16:03:13 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:34.246 16:03:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:34.246 16:03:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.246 16:03:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:34.246 16:03:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.246 16:03:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:34.246 Found net devices under 0000:86:00.0: cvl_0_0 00:20:34.246 16:03:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.246 16:03:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:34.246 16:03:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.246 16:03:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:34.246 16:03:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.246 16:03:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:34.246 Found net devices under 0000:86:00.1: cvl_0_1 00:20:34.246 16:03:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.246 16:03:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:34.246 16:03:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:34.246 16:03:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:34.246 16:03:13 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:34.246 16:03:13 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:34.246 16:03:13 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.246 16:03:13 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.246 16:03:13 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:34.246 16:03:13 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:34.246 16:03:13 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:34.246 16:03:13 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:34.246 16:03:13 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:34.246 16:03:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:34.246 16:03:13 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.246 16:03:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:34.246 16:03:13 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:34.246 16:03:13 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:34.246 16:03:13 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:34.247 16:03:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:34.247 16:03:13 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
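gather_supported_nvmf_pci_devs resolves each whitelisted NIC, here the two 0x159b (E810) functions at 0000:86:00.0 and 0000:86:00.1, to its kernel interface by globbing sysfs, which is where the cvl_0_0 and cvl_0_1 names come from. The lookup on its own, with one of the PCI addresses from the log:

    pci=0000:86:00.0
    # Every entry under .../net/ is a netdev bound to this PCI function (cvl_0_0 here).
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        echo "Found net device under $pci: ${dev##*/}"
    done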
add 10.0.0.2/24 dev cvl_0_0 00:20:34.247 16:03:13 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:34.247 16:03:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:34.247 16:03:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:34.247 16:03:13 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:34.247 16:03:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:34.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:20:34.247 00:20:34.247 --- 10.0.0.2 ping statistics --- 00:20:34.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.247 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:20:34.247 16:03:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:34.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:34.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.440 ms 00:20:34.247 00:20:34.247 --- 10.0.0.1 ping statistics --- 00:20:34.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.247 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:20:34.247 16:03:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.247 16:03:13 -- nvmf/common.sh@411 -- # return 0 00:20:34.247 16:03:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:34.247 16:03:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.247 16:03:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:34.247 16:03:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:34.247 16:03:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.247 16:03:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:34.247 16:03:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:34.247 16:03:13 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:34.247 16:03:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:34.247 16:03:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:34.247 16:03:13 -- common/autotest_common.sh@10 -- # set +x 00:20:34.506 16:03:13 -- nvmf/common.sh@470 -- # nvmfpid=2487868 00:20:34.506 16:03:13 -- nvmf/common.sh@471 -- # waitforlisten 2487868 00:20:34.506 16:03:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:34.506 16:03:13 -- common/autotest_common.sh@817 -- # '[' -z 2487868 ']' 00:20:34.506 16:03:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.506 16:03:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:34.506 16:03:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.506 16:03:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:34.506 16:03:13 -- common/autotest_common.sh@10 -- # set +x 00:20:34.506 [2024-04-26 16:03:14.039809] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
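Since both E810 ports live in the same host, nvmf_tcp_init pushes the target-side port into its own network namespace so that initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2) exchange traffic over the physical link rather than a local shortcut. The commands the trace just ran, gathered in one place with the same names and addresses:

    ip netns add cvl_0_0_ns_spdk                       # namespace that owns the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
    ping -c 1 10.0.0.2                                 # initiator -> target reachability check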
00:20:34.506 [2024-04-26 16:03:14.039892] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.506 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.506 [2024-04-26 16:03:14.145763] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.765 [2024-04-26 16:03:14.359765] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.765 [2024-04-26 16:03:14.359811] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.765 [2024-04-26 16:03:14.359821] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.765 [2024-04-26 16:03:14.359833] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.765 [2024-04-26 16:03:14.359841] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:34.765 [2024-04-26 16:03:14.359873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.333 16:03:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:35.333 16:03:14 -- common/autotest_common.sh@850 -- # return 0 00:20:35.333 16:03:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:35.333 16:03:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:35.333 16:03:14 -- common/autotest_common.sh@10 -- # set +x 00:20:35.333 16:03:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.333 16:03:14 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:35.333 16:03:14 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:35.333 16:03:14 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:35.333 16:03:14 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:35.333 16:03:14 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:35.333 16:03:14 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:35.333 16:03:14 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:35.333 16:03:14 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:35.333 [2024-04-26 16:03:14.987652] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.333 [2024-04-26 16:03:15.003641] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:35.333 [2024-04-26 16:03:15.003850] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.591 [2024-04-26 16:03:15.080793] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:35.591 malloc0 00:20:35.591 16:03:15 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:35.591 16:03:15 -- fips/fips.sh@147 -- # bdevperf_pid=2488117 00:20:35.591 16:03:15 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:35.592 16:03:15 -- 
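setup_nvmf_tgt_conf drives the target entirely over rpc.py: the interchange-format PSK (NVMeTLSkey-1:01:...) is written to key.txt and locked down to 0600, a TCP transport and a listener on 10.0.0.2:4420 are created (the TLS listener is what triggers the "considered experimental" notice), and a malloc0 namespace is exported to a host that is only admitted with that PSK (hence the PSK-path deprecation warning). The log issues these as one batched rpc.py call, so the per-command breakdown below is a reconstruction of a typical equivalent sequence for this SPDK vintage; the individual RPC flags, and the malloc size/block arguments in particular, are assumptions rather than verbatim log output:

    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
    chmod 0600 key.txt

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_malloc_create -b malloc0 32 4096                      # assumed size/block
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key.txt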
fips/fips.sh@148 -- # waitforlisten 2488117 /var/tmp/bdevperf.sock 00:20:35.592 16:03:15 -- common/autotest_common.sh@817 -- # '[' -z 2488117 ']' 00:20:35.592 16:03:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.592 16:03:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:35.592 16:03:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.592 16:03:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:35.592 16:03:15 -- common/autotest_common.sh@10 -- # set +x 00:20:35.592 [2024-04-26 16:03:15.208706] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:20:35.592 [2024-04-26 16:03:15.208804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488117 ] 00:20:35.592 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.850 [2024-04-26 16:03:15.307777] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.108 [2024-04-26 16:03:15.533952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.367 16:03:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:36.367 16:03:15 -- common/autotest_common.sh@850 -- # return 0 00:20:36.367 16:03:15 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:36.624 [2024-04-26 16:03:16.124601] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.624 [2024-04-26 16:03:16.124711] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:36.624 TLSTESTn1 00:20:36.624 16:03:16 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:36.882 Running I/O for 10 seconds... 
00:20:46.851 00:20:46.851 Latency(us) 00:20:46.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.852 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:46.852 Verification LBA range: start 0x0 length 0x2000 00:20:46.852 TLSTESTn1 : 10.06 1761.13 6.88 0.00 0.00 72480.55 8947.09 117622.87 00:20:46.852 =================================================================================================================== 00:20:46.852 Total : 1761.13 6.88 0.00 0.00 72480.55 8947.09 117622.87 00:20:46.852 0 00:20:46.852 16:03:26 -- fips/fips.sh@1 -- # cleanup 00:20:46.852 16:03:26 -- fips/fips.sh@15 -- # process_shm --id 0 00:20:46.852 16:03:26 -- common/autotest_common.sh@794 -- # type=--id 00:20:46.852 16:03:26 -- common/autotest_common.sh@795 -- # id=0 00:20:46.852 16:03:26 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:20:46.852 16:03:26 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:46.852 16:03:26 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:20:46.852 16:03:26 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:20:46.852 16:03:26 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:20:46.852 16:03:26 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:46.852 nvmf_trace.0 00:20:46.852 16:03:26 -- common/autotest_common.sh@809 -- # return 0 00:20:46.852 16:03:26 -- fips/fips.sh@16 -- # killprocess 2488117 00:20:46.852 16:03:26 -- common/autotest_common.sh@936 -- # '[' -z 2488117 ']' 00:20:46.852 16:03:26 -- common/autotest_common.sh@940 -- # kill -0 2488117 00:20:46.852 16:03:26 -- common/autotest_common.sh@941 -- # uname 00:20:46.852 16:03:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:46.852 16:03:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2488117 00:20:47.110 16:03:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:47.110 16:03:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:47.110 16:03:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2488117' 00:20:47.110 killing process with pid 2488117 00:20:47.110 16:03:26 -- common/autotest_common.sh@955 -- # kill 2488117 00:20:47.110 Received shutdown signal, test time was about 10.000000 seconds 00:20:47.110 00:20:47.110 Latency(us) 00:20:47.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.110 =================================================================================================================== 00:20:47.110 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:47.110 [2024-04-26 16:03:26.542718] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:47.110 16:03:26 -- common/autotest_common.sh@960 -- # wait 2488117 00:20:48.046 16:03:27 -- fips/fips.sh@17 -- # nvmftestfini 00:20:48.046 16:03:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:48.046 16:03:27 -- nvmf/common.sh@117 -- # sync 00:20:48.046 16:03:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:48.046 16:03:27 -- nvmf/common.sh@120 -- # set +e 00:20:48.046 16:03:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:48.046 16:03:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:48.046 rmmod nvme_tcp 00:20:48.046 rmmod nvme_fabrics 00:20:48.046 rmmod nvme_keyring 
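The TLSTESTn1 summary works out to 1761.13 verify IOPS at the 4096-byte I/O size over the ten-second run; the MiB/s column is simply IOPS multiplied by the I/O size. Reproducing the conversion from the numbers in the table:

    # 1761.13 IOPS * 4096 bytes per I/O / 2^20 bytes per MiB ~= 6.88 MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 1761.13 * 4096 / (1024 * 1024) }'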
00:20:48.046 16:03:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:48.046 16:03:27 -- nvmf/common.sh@124 -- # set -e 00:20:48.046 16:03:27 -- nvmf/common.sh@125 -- # return 0 00:20:48.046 16:03:27 -- nvmf/common.sh@478 -- # '[' -n 2487868 ']' 00:20:48.046 16:03:27 -- nvmf/common.sh@479 -- # killprocess 2487868 00:20:48.046 16:03:27 -- common/autotest_common.sh@936 -- # '[' -z 2487868 ']' 00:20:48.046 16:03:27 -- common/autotest_common.sh@940 -- # kill -0 2487868 00:20:48.046 16:03:27 -- common/autotest_common.sh@941 -- # uname 00:20:48.046 16:03:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:48.046 16:03:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2487868 00:20:48.046 16:03:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:48.046 16:03:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:48.046 16:03:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2487868' 00:20:48.046 killing process with pid 2487868 00:20:48.046 16:03:27 -- common/autotest_common.sh@955 -- # kill 2487868 00:20:48.046 [2024-04-26 16:03:27.680590] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:48.046 16:03:27 -- common/autotest_common.sh@960 -- # wait 2487868 00:20:49.420 16:03:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:49.420 16:03:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:49.420 16:03:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:49.420 16:03:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:49.420 16:03:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:49.420 16:03:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.420 16:03:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:49.420 16:03:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.953 16:03:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:51.953 16:03:31 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:51.953 00:20:51.953 real 0m22.804s 00:20:51.953 user 0m25.996s 00:20:51.953 sys 0m8.501s 00:20:51.953 16:03:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:51.953 16:03:31 -- common/autotest_common.sh@10 -- # set +x 00:20:51.953 ************************************ 00:20:51.953 END TEST nvmf_fips 00:20:51.953 ************************************ 00:20:51.953 16:03:31 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:20:51.953 16:03:31 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:20:51.953 16:03:31 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:20:51.953 16:03:31 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:20:51.953 16:03:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:51.953 16:03:31 -- common/autotest_common.sh@10 -- # set +x 00:20:57.222 16:03:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:57.222 16:03:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:57.222 16:03:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:57.222 16:03:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:57.222 16:03:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:57.222 16:03:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:57.222 16:03:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:57.222 16:03:36 -- nvmf/common.sh@295 -- # net_devs=() 00:20:57.222 16:03:36 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:20:57.222 16:03:36 -- nvmf/common.sh@296 -- # e810=() 00:20:57.222 16:03:36 -- nvmf/common.sh@296 -- # local -ga e810 00:20:57.222 16:03:36 -- nvmf/common.sh@297 -- # x722=() 00:20:57.222 16:03:36 -- nvmf/common.sh@297 -- # local -ga x722 00:20:57.222 16:03:36 -- nvmf/common.sh@298 -- # mlx=() 00:20:57.222 16:03:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:57.222 16:03:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:57.222 16:03:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:57.222 16:03:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:57.222 16:03:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:57.222 16:03:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:57.222 16:03:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:57.222 16:03:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:57.222 16:03:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:57.222 16:03:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:57.222 16:03:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:57.222 16:03:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:57.222 16:03:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:57.222 16:03:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:57.222 16:03:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:57.222 16:03:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:57.222 16:03:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:57.222 16:03:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:57.222 16:03:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:57.222 16:03:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:57.222 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:57.222 16:03:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:57.222 16:03:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:57.222 16:03:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.222 16:03:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.222 16:03:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:57.222 16:03:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:57.222 16:03:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:57.222 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:57.222 16:03:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:57.222 16:03:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:57.222 16:03:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.222 16:03:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.222 16:03:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:57.222 16:03:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:57.222 16:03:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:57.222 16:03:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:57.222 16:03:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:57.222 16:03:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.222 16:03:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:57.222 16:03:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.222 16:03:36 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:86:00.0: cvl_0_0' 00:20:57.222 Found net devices under 0000:86:00.0: cvl_0_0 00:20:57.222 16:03:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.222 16:03:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:57.222 16:03:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.222 16:03:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:57.222 16:03:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.222 16:03:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:57.222 Found net devices under 0000:86:00.1: cvl_0_1 00:20:57.222 16:03:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.222 16:03:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:57.222 16:03:36 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:57.222 16:03:36 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:20:57.222 16:03:36 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:57.222 16:03:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:57.222 16:03:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:57.222 16:03:36 -- common/autotest_common.sh@10 -- # set +x 00:20:57.222 ************************************ 00:20:57.222 START TEST nvmf_perf_adq 00:20:57.222 ************************************ 00:20:57.223 16:03:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:57.223 * Looking for test storage... 00:20:57.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:57.223 16:03:36 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:57.223 16:03:36 -- nvmf/common.sh@7 -- # uname -s 00:20:57.223 16:03:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:57.223 16:03:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:57.223 16:03:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:57.223 16:03:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:57.223 16:03:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:57.223 16:03:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:57.223 16:03:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:57.223 16:03:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:57.223 16:03:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:57.223 16:03:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:57.223 16:03:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:57.223 16:03:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:57.223 16:03:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:57.223 16:03:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:57.223 16:03:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:57.223 16:03:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:57.223 16:03:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:57.223 16:03:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:57.223 16:03:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:57.223 16:03:36 -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:57.223 16:03:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.223 16:03:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.223 16:03:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.223 16:03:36 -- paths/export.sh@5 -- # export PATH 00:20:57.223 16:03:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.223 16:03:36 -- nvmf/common.sh@47 -- # : 0 00:20:57.223 16:03:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:57.223 16:03:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:57.223 16:03:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:57.223 16:03:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:57.223 16:03:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:57.223 16:03:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:57.223 16:03:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:57.223 16:03:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:57.223 16:03:36 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:57.223 16:03:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:57.223 16:03:36 -- common/autotest_common.sh@10 -- # set +x 00:21:02.490 16:03:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:02.490 16:03:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:02.490 16:03:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:02.490 16:03:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:02.490 
16:03:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:02.490 16:03:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:02.490 16:03:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:02.490 16:03:41 -- nvmf/common.sh@295 -- # net_devs=() 00:21:02.490 16:03:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:02.490 16:03:41 -- nvmf/common.sh@296 -- # e810=() 00:21:02.490 16:03:41 -- nvmf/common.sh@296 -- # local -ga e810 00:21:02.490 16:03:41 -- nvmf/common.sh@297 -- # x722=() 00:21:02.490 16:03:41 -- nvmf/common.sh@297 -- # local -ga x722 00:21:02.490 16:03:41 -- nvmf/common.sh@298 -- # mlx=() 00:21:02.490 16:03:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:02.491 16:03:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.491 16:03:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.491 16:03:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.491 16:03:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.491 16:03:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.491 16:03:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.491 16:03:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.491 16:03:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.491 16:03:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.491 16:03:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.491 16:03:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.491 16:03:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:02.491 16:03:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:02.491 16:03:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:02.491 16:03:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:02.491 16:03:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:02.491 16:03:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:02.491 16:03:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.491 16:03:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:02.491 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:02.491 16:03:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.491 16:03:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.491 16:03:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.491 16:03:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.491 16:03:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.491 16:03:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.491 16:03:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:02.491 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:02.491 16:03:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.491 16:03:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.491 16:03:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.491 16:03:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.491 16:03:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.491 16:03:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:02.491 16:03:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:02.491 16:03:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:02.491 16:03:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:21:02.491 16:03:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.491 16:03:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:02.491 16:03:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.491 16:03:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:02.491 Found net devices under 0000:86:00.0: cvl_0_0 00:21:02.491 16:03:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.491 16:03:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.491 16:03:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.491 16:03:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:02.491 16:03:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.491 16:03:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:02.491 Found net devices under 0000:86:00.1: cvl_0_1 00:21:02.491 16:03:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.491 16:03:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:02.491 16:03:41 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.491 16:03:41 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:02.491 16:03:41 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:02.491 16:03:41 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:21:02.491 16:03:41 -- target/perf_adq.sh@52 -- # rmmod ice 00:21:03.079 16:03:42 -- target/perf_adq.sh@53 -- # modprobe ice 00:21:04.980 16:03:44 -- target/perf_adq.sh@54 -- # sleep 5 00:21:10.254 16:03:49 -- target/perf_adq.sh@67 -- # nvmftestinit 00:21:10.254 16:03:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:10.254 16:03:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.254 16:03:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:10.254 16:03:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:10.254 16:03:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:10.254 16:03:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.254 16:03:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:10.254 16:03:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.254 16:03:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:10.254 16:03:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:10.254 16:03:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:10.254 16:03:49 -- common/autotest_common.sh@10 -- # set +x 00:21:10.254 16:03:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:10.254 16:03:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:10.254 16:03:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:10.254 16:03:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:10.254 16:03:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:10.254 16:03:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:10.254 16:03:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:10.254 16:03:49 -- nvmf/common.sh@295 -- # net_devs=() 00:21:10.254 16:03:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:10.254 16:03:49 -- nvmf/common.sh@296 -- # e810=() 00:21:10.254 16:03:49 -- nvmf/common.sh@296 -- # local -ga e810 00:21:10.254 16:03:49 -- nvmf/common.sh@297 -- # x722=() 00:21:10.254 16:03:49 -- nvmf/common.sh@297 -- # local -ga x722 00:21:10.254 16:03:49 -- nvmf/common.sh@298 -- # mlx=() 00:21:10.254 16:03:49 -- 
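adq_reload_driver bounces the ice driver so the E810 ports come back with a clean queue and channel state before any ADQ-specific configuration is applied, then simply sleeps five seconds for the interfaces to reappear. A sketch of the same reload with an explicit wait on the renamed netdev instead of the fixed sleep (the wait loop is an alternative of mine, not what perf_adq.sh does):

    rmmod ice
    modprobe ice

    # Wait up to ~10s for the E810 netdev discovered earlier (cvl_0_0) to come back.
    for _ in $(seq 1 10); do
        [[ -e /sys/class/net/cvl_0_0 ]] && break
        sleep 1
    done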
nvmf/common.sh@298 -- # local -ga mlx 00:21:10.254 16:03:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.254 16:03:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.254 16:03:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.254 16:03:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.254 16:03:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.254 16:03:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.254 16:03:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.254 16:03:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.254 16:03:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.254 16:03:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.254 16:03:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.254 16:03:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:10.254 16:03:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:10.254 16:03:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:10.254 16:03:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:10.254 16:03:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:10.254 16:03:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:10.254 16:03:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:10.254 16:03:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:10.254 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:10.254 16:03:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:10.254 16:03:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:10.254 16:03:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.254 16:03:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.254 16:03:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:10.254 16:03:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:10.254 16:03:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:10.254 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:10.254 16:03:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:10.254 16:03:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:10.254 16:03:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.254 16:03:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.254 16:03:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:10.254 16:03:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:10.254 16:03:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:10.254 16:03:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:10.254 16:03:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:10.254 16:03:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.254 16:03:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:10.254 16:03:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.254 16:03:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:10.254 Found net devices under 0000:86:00.0: cvl_0_0 00:21:10.254 16:03:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.254 16:03:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:10.254 16:03:49 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.254 16:03:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:10.254 16:03:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.254 16:03:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:10.254 Found net devices under 0000:86:00.1: cvl_0_1 00:21:10.254 16:03:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.254 16:03:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:10.254 16:03:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:10.255 16:03:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:10.255 16:03:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:10.255 16:03:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:10.255 16:03:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.255 16:03:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.255 16:03:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.255 16:03:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:10.255 16:03:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.255 16:03:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.255 16:03:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:10.255 16:03:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.255 16:03:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.255 16:03:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:10.255 16:03:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:10.255 16:03:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.255 16:03:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.255 16:03:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.255 16:03:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.255 16:03:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:10.255 16:03:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.255 16:03:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.255 16:03:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.255 16:03:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:10.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:10.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:21:10.255 00:21:10.255 --- 10.0.0.2 ping statistics --- 00:21:10.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.255 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:21:10.255 16:03:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:10.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.386 ms 00:21:10.255 00:21:10.255 --- 10.0.0.1 ping statistics --- 00:21:10.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.255 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:21:10.255 16:03:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.255 16:03:49 -- nvmf/common.sh@411 -- # return 0 00:21:10.255 16:03:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:10.255 16:03:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.255 16:03:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:10.255 16:03:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:10.255 16:03:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.255 16:03:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:10.255 16:03:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:10.255 16:03:49 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:10.255 16:03:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:10.255 16:03:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:10.255 16:03:49 -- common/autotest_common.sh@10 -- # set +x 00:21:10.255 16:03:49 -- nvmf/common.sh@470 -- # nvmfpid=2498154 00:21:10.255 16:03:49 -- nvmf/common.sh@471 -- # waitforlisten 2498154 00:21:10.255 16:03:49 -- common/autotest_common.sh@817 -- # '[' -z 2498154 ']' 00:21:10.255 16:03:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.255 16:03:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:10.255 16:03:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.255 16:03:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:10.255 16:03:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:10.255 16:03:49 -- common/autotest_common.sh@10 -- # set +x 00:21:10.513 [2024-04-26 16:03:49.940809] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:10.513 [2024-04-26 16:03:49.940902] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.513 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.513 [2024-04-26 16:03:50.052796] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:10.773 [2024-04-26 16:03:50.286988] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.773 [2024-04-26 16:03:50.287031] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.773 [2024-04-26 16:03:50.287042] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.773 [2024-04-26 16:03:50.287053] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.773 [2024-04-26 16:03:50.287061] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
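For readers following the trace above, the netns-based TCP test bed that nvmf_tcp_init assembles reduces to roughly the command sequence below. This is only a condensed sketch of what nvmf/common.sh traces in this log (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are taken from the output above), not the script itself.

# Sketch of the TCP test-bed setup traced above: cvl_0_0 is moved into its own
# network namespace and becomes the target-side interface (10.0.0.2); cvl_0_1
# stays in the default namespace as the initiator-side interface (10.0.0.1).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP in
ping -c 1 10.0.0.2                                                 # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator check
modprobe nvme-tcp                                                  # host-side NVMe/TCP driver

The nvmf_tgt application is then launched under 'ip netns exec cvl_0_0_ns_spdk' (visible in the trace below), so the target only ever sees the namespaced cvl_0_0 interface while the perf initiator runs in the default namespace over cvl_0_1.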
00:21:10.773 [2024-04-26 16:03:50.287138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.773 [2024-04-26 16:03:50.287213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.773 [2024-04-26 16:03:50.287271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.773 [2024-04-26 16:03:50.287280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:11.339 16:03:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:11.339 16:03:50 -- common/autotest_common.sh@850 -- # return 0 00:21:11.339 16:03:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:11.339 16:03:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:11.339 16:03:50 -- common/autotest_common.sh@10 -- # set +x 00:21:11.339 16:03:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.339 16:03:50 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:21:11.339 16:03:50 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:11.339 16:03:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.339 16:03:50 -- common/autotest_common.sh@10 -- # set +x 00:21:11.339 16:03:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:11.339 16:03:50 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:21:11.339 16:03:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.339 16:03:50 -- common/autotest_common.sh@10 -- # set +x 00:21:11.597 16:03:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:11.597 16:03:51 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:11.597 16:03:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.597 16:03:51 -- common/autotest_common.sh@10 -- # set +x 00:21:11.597 [2024-04-26 16:03:51.198106] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.597 16:03:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:11.597 16:03:51 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:11.597 16:03:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.597 16:03:51 -- common/autotest_common.sh@10 -- # set +x 00:21:11.857 Malloc1 00:21:11.857 16:03:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:11.857 16:03:51 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:11.857 16:03:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.857 16:03:51 -- common/autotest_common.sh@10 -- # set +x 00:21:11.857 16:03:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:11.857 16:03:51 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:11.857 16:03:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.857 16:03:51 -- common/autotest_common.sh@10 -- # set +x 00:21:11.857 16:03:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:11.857 16:03:51 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.857 16:03:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.857 16:03:51 -- common/autotest_common.sh@10 -- # set +x 00:21:11.857 [2024-04-26 16:03:51.325546] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.857 16:03:51 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:11.857 16:03:51 -- target/perf_adq.sh@73 -- # perfpid=2498433 00:21:11.857 16:03:51 -- target/perf_adq.sh@74 -- # sleep 2 00:21:11.857 16:03:51 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:11.857 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.825 16:03:53 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:21:13.825 16:03:53 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:13.825 16:03:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.825 16:03:53 -- target/perf_adq.sh@76 -- # wc -l 00:21:13.825 16:03:53 -- common/autotest_common.sh@10 -- # set +x 00:21:13.825 16:03:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.825 16:03:53 -- target/perf_adq.sh@76 -- # count=4 00:21:13.825 16:03:53 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:21:13.825 16:03:53 -- target/perf_adq.sh@81 -- # wait 2498433 00:21:21.925 Initializing NVMe Controllers 00:21:21.925 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:21.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:21.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:21.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:21.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:21.925 Initialization complete. Launching workers. 00:21:21.925 ======================================================== 00:21:21.925 Latency(us) 00:21:21.925 Device Information : IOPS MiB/s Average min max 00:21:21.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8972.50 35.05 7132.60 3112.98 10210.63 00:21:21.926 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9146.10 35.73 6997.30 4082.35 13753.87 00:21:21.926 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9038.90 35.31 7079.96 2728.19 12917.48 00:21:21.926 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9040.30 35.31 7079.62 3950.68 11835.92 00:21:21.926 ======================================================== 00:21:21.926 Total : 36197.80 141.40 7072.04 2728.19 13753.87 00:21:21.926 00:21:21.926 16:04:01 -- target/perf_adq.sh@82 -- # nvmftestfini 00:21:21.926 16:04:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:21.926 16:04:01 -- nvmf/common.sh@117 -- # sync 00:21:21.926 16:04:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:21.926 16:04:01 -- nvmf/common.sh@120 -- # set +e 00:21:21.926 16:04:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:21.926 16:04:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:21.926 rmmod nvme_tcp 00:21:21.926 rmmod nvme_fabrics 00:21:21.926 rmmod nvme_keyring 00:21:21.926 16:04:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:21.926 16:04:01 -- nvmf/common.sh@124 -- # set -e 00:21:21.926 16:04:01 -- nvmf/common.sh@125 -- # return 0 00:21:21.926 16:04:01 -- nvmf/common.sh@478 -- # '[' -n 2498154 ']' 00:21:21.926 16:04:01 -- nvmf/common.sh@479 -- # killprocess 2498154 00:21:21.926 16:04:01 -- common/autotest_common.sh@936 -- # '[' -z 2498154 ']' 00:21:21.926 16:04:01 -- common/autotest_common.sh@940 -- # 
kill -0 2498154 00:21:21.926 16:04:01 -- common/autotest_common.sh@941 -- # uname 00:21:21.926 16:04:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:21.926 16:04:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2498154 00:21:22.184 16:04:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:22.184 16:04:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:22.184 16:04:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2498154' 00:21:22.184 killing process with pid 2498154 00:21:22.184 16:04:01 -- common/autotest_common.sh@955 -- # kill 2498154 00:21:22.184 16:04:01 -- common/autotest_common.sh@960 -- # wait 2498154 00:21:23.558 16:04:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:23.558 16:04:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:23.558 16:04:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:23.558 16:04:03 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:23.558 16:04:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:23.558 16:04:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.558 16:04:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:23.558 16:04:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.091 16:04:05 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:26.091 16:04:05 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:21:26.091 16:04:05 -- target/perf_adq.sh@52 -- # rmmod ice 00:21:27.027 16:04:06 -- target/perf_adq.sh@53 -- # modprobe ice 00:21:28.935 16:04:08 -- target/perf_adq.sh@54 -- # sleep 5 00:21:34.212 16:04:13 -- target/perf_adq.sh@87 -- # nvmftestinit 00:21:34.212 16:04:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:34.212 16:04:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.212 16:04:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:34.212 16:04:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:34.212 16:04:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:34.212 16:04:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.212 16:04:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:34.212 16:04:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.212 16:04:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:34.212 16:04:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:34.212 16:04:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:34.212 16:04:13 -- common/autotest_common.sh@10 -- # set +x 00:21:34.212 16:04:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:34.212 16:04:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:34.212 16:04:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:34.212 16:04:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:34.212 16:04:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:34.212 16:04:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:34.212 16:04:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:34.212 16:04:13 -- nvmf/common.sh@295 -- # net_devs=() 00:21:34.212 16:04:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:34.212 16:04:13 -- nvmf/common.sh@296 -- # e810=() 00:21:34.212 16:04:13 -- nvmf/common.sh@296 -- # local -ga e810 00:21:34.212 16:04:13 -- nvmf/common.sh@297 -- # x722=() 00:21:34.212 16:04:13 -- nvmf/common.sh@297 -- # local -ga x722 00:21:34.212 16:04:13 -- nvmf/common.sh@298 -- # mlx=() 00:21:34.212 16:04:13 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:21:34.212 16:04:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:34.212 16:04:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:34.212 16:04:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:34.212 16:04:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:34.212 16:04:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:34.212 16:04:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:34.212 16:04:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:34.212 16:04:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:34.212 16:04:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:34.212 16:04:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:34.212 16:04:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:34.212 16:04:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:34.212 16:04:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:34.212 16:04:13 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:34.212 16:04:13 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:34.212 16:04:13 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:34.212 16:04:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:34.212 16:04:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:34.212 16:04:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:34.212 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:34.212 16:04:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:34.212 16:04:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:34.212 16:04:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.212 16:04:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.212 16:04:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:34.212 16:04:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:34.212 16:04:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:34.212 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:34.212 16:04:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:34.212 16:04:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:34.212 16:04:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.212 16:04:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.212 16:04:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:34.212 16:04:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:34.212 16:04:13 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:34.212 16:04:13 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:34.212 16:04:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:34.212 16:04:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.212 16:04:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:34.212 16:04:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.212 16:04:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:34.212 Found net devices under 0000:86:00.0: cvl_0_0 00:21:34.212 16:04:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.212 16:04:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:34.212 16:04:13 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.212 16:04:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:34.212 16:04:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.212 16:04:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:34.212 Found net devices under 0000:86:00.1: cvl_0_1 00:21:34.212 16:04:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.212 16:04:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:34.212 16:04:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:34.212 16:04:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:34.212 16:04:13 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:34.212 16:04:13 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:34.212 16:04:13 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:34.212 16:04:13 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:34.212 16:04:13 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:34.212 16:04:13 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:34.212 16:04:13 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:34.212 16:04:13 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:34.212 16:04:13 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:34.212 16:04:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:34.212 16:04:13 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:34.212 16:04:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:34.213 16:04:13 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:34.213 16:04:13 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:34.213 16:04:13 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:34.213 16:04:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:34.213 16:04:13 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:34.213 16:04:13 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:34.213 16:04:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:34.213 16:04:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:34.213 16:04:13 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:34.213 16:04:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:34.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:34.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:21:34.213 00:21:34.213 --- 10.0.0.2 ping statistics --- 00:21:34.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.213 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:21:34.213 16:04:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:34.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:34.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:21:34.213 00:21:34.213 --- 10.0.0.1 ping statistics --- 00:21:34.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.213 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:21:34.213 16:04:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:34.213 16:04:13 -- nvmf/common.sh@411 -- # return 0 00:21:34.213 16:04:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:34.213 16:04:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:34.213 16:04:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:34.213 16:04:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:34.213 16:04:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:34.213 16:04:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:34.213 16:04:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:34.213 16:04:13 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:21:34.213 16:04:13 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:34.213 16:04:13 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:34.213 16:04:13 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:34.213 net.core.busy_poll = 1 00:21:34.213 16:04:13 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:34.213 net.core.busy_read = 1 00:21:34.213 16:04:13 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:34.213 16:04:13 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:34.213 16:04:13 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:34.213 16:04:13 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:34.213 16:04:13 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:34.213 16:04:13 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:34.213 16:04:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:34.213 16:04:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:34.213 16:04:13 -- common/autotest_common.sh@10 -- # set +x 00:21:34.213 16:04:13 -- nvmf/common.sh@470 -- # nvmfpid=2502386 00:21:34.213 16:04:13 -- nvmf/common.sh@471 -- # waitforlisten 2502386 00:21:34.213 16:04:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:34.213 16:04:13 -- common/autotest_common.sh@817 -- # '[' -z 2502386 ']' 00:21:34.213 16:04:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.213 16:04:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:34.213 16:04:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
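The adq_configure_driver step traced above prepares Application Device Queues on the ice NIC before the second perf run: hardware TC offload is enabled, busy polling is turned on, the device is split into two traffic classes (queues 0-1 for TC0, queues 2-3 for TC1), and a hardware flower filter steers NVMe/TCP traffic for 10.0.0.2:4420 into TC1. A condensed sketch of those commands as they appear in this log, all aimed at cvl_0_0 inside the cvl_0_0_ns_spdk namespace:

# ADQ driver-side configuration, as traced by target/perf_adq.sh above.
ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two traffic classes in channel mode: TC0 = 2 queues starting at queue 0, TC1 = 2 queues starting at queue 2.
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
# Hardware-only (skip_sw) flower filter: NVMe/TCP traffic to 10.0.0.2:4420 lands in TC1.
ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs call at the end of that trace is an SPDK helper (scripts/perf/nvmf/set_xps_rxqs) which, as I understand it, configures transmit packet steering so that TX queues follow the CPUs of their corresponding RX queues.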
00:21:34.213 16:04:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:34.213 16:04:13 -- common/autotest_common.sh@10 -- # set +x 00:21:34.473 [2024-04-26 16:04:13.964185] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:34.473 [2024-04-26 16:04:13.964271] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.473 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.473 [2024-04-26 16:04:14.078212] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:34.732 [2024-04-26 16:04:14.306629] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.732 [2024-04-26 16:04:14.306685] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.732 [2024-04-26 16:04:14.306695] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.732 [2024-04-26 16:04:14.306705] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.732 [2024-04-26 16:04:14.306712] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:34.732 [2024-04-26 16:04:14.306825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.732 [2024-04-26 16:04:14.306900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.732 [2024-04-26 16:04:14.306960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.732 [2024-04-26 16:04:14.306969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.301 16:04:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:35.301 16:04:14 -- common/autotest_common.sh@850 -- # return 0 00:21:35.301 16:04:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:35.301 16:04:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:35.301 16:04:14 -- common/autotest_common.sh@10 -- # set +x 00:21:35.301 16:04:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.301 16:04:14 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:21:35.301 16:04:14 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:35.301 16:04:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.301 16:04:14 -- common/autotest_common.sh@10 -- # set +x 00:21:35.301 16:04:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.301 16:04:14 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:21:35.301 16:04:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.301 16:04:14 -- common/autotest_common.sh@10 -- # set +x 00:21:35.562 16:04:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.562 16:04:15 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:35.562 16:04:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.562 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:21:35.562 [2024-04-26 16:04:15.199030] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.562 16:04:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.562 16:04:15 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 
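On the target side, adq_configure_nvmf_target enables socket placement IDs and gives the TCP transport a socket priority matching the ADQ traffic class, then registers a malloc-backed namespace and a listener on 10.0.0.2:4420 (traced just below). The log issues these through rpc_cmd; expressed as direct scripts/rpc.py calls purely for illustration (an equivalent rendering, not the literal commands in the log), the sequence is roughly:

# Target-side ADQ configuration and subsystem setup, per the rpc_cmd trace above and below.
scripts/rpc.py sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The load itself comes from spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0, i.e. queue depth 64, 4 KiB random reads for 10 seconds on cores 4-7, which is why the result tables in this log report lcores 4 through 7.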
00:21:35.562 16:04:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.562 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:21:35.827 Malloc1 00:21:35.827 16:04:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.827 16:04:15 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:35.827 16:04:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.827 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:21:35.827 16:04:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.827 16:04:15 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:35.827 16:04:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.827 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:21:35.827 16:04:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.827 16:04:15 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:35.827 16:04:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.827 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:21:35.827 [2024-04-26 16:04:15.324068] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.827 16:04:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.827 16:04:15 -- target/perf_adq.sh@94 -- # perfpid=2502674 00:21:35.827 16:04:15 -- target/perf_adq.sh@95 -- # sleep 2 00:21:35.827 16:04:15 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:35.827 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.735 16:04:17 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:37.736 16:04:17 -- target/perf_adq.sh@97 -- # wc -l 00:21:37.736 16:04:17 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:21:37.736 16:04:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.736 16:04:17 -- common/autotest_common.sh@10 -- # set +x 00:21:37.736 16:04:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.736 16:04:17 -- target/perf_adq.sh@97 -- # count=2 00:21:37.736 16:04:17 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:21:37.736 16:04:17 -- target/perf_adq.sh@103 -- # wait 2502674 00:21:47.724 Initializing NVMe Controllers 00:21:47.724 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:47.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:47.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:47.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:47.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:47.724 Initialization complete. Launching workers. 
00:21:47.724 ======================================================== 00:21:47.724 Latency(us) 00:21:47.724 Device Information : IOPS MiB/s Average min max 00:21:47.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4820.70 18.83 13280.46 1886.88 57343.70 00:21:47.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4992.20 19.50 12855.27 2151.10 61264.20 00:21:47.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10448.60 40.81 6125.51 2037.59 46801.59 00:21:47.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4622.00 18.05 13871.91 1929.41 59204.59 00:21:47.724 ======================================================== 00:21:47.724 Total : 24883.50 97.20 10300.65 1886.88 61264.20 00:21:47.724 00:21:47.724 16:04:25 -- target/perf_adq.sh@104 -- # nvmftestfini 00:21:47.724 16:04:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:47.724 16:04:25 -- nvmf/common.sh@117 -- # sync 00:21:47.724 16:04:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:47.724 16:04:25 -- nvmf/common.sh@120 -- # set +e 00:21:47.724 16:04:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:47.724 16:04:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:47.724 rmmod nvme_tcp 00:21:47.724 rmmod nvme_fabrics 00:21:47.724 rmmod nvme_keyring 00:21:47.724 16:04:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:47.724 16:04:25 -- nvmf/common.sh@124 -- # set -e 00:21:47.724 16:04:25 -- nvmf/common.sh@125 -- # return 0 00:21:47.724 16:04:25 -- nvmf/common.sh@478 -- # '[' -n 2502386 ']' 00:21:47.724 16:04:25 -- nvmf/common.sh@479 -- # killprocess 2502386 00:21:47.724 16:04:25 -- common/autotest_common.sh@936 -- # '[' -z 2502386 ']' 00:21:47.724 16:04:25 -- common/autotest_common.sh@940 -- # kill -0 2502386 00:21:47.724 16:04:25 -- common/autotest_common.sh@941 -- # uname 00:21:47.724 16:04:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:47.724 16:04:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2502386 00:21:47.724 16:04:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:47.724 16:04:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:47.724 16:04:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2502386' 00:21:47.724 killing process with pid 2502386 00:21:47.724 16:04:25 -- common/autotest_common.sh@955 -- # kill 2502386 00:21:47.724 16:04:25 -- common/autotest_common.sh@960 -- # wait 2502386 00:21:47.724 16:04:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:47.724 16:04:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:47.724 16:04:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:47.724 16:04:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:47.724 16:04:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:47.724 16:04:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.724 16:04:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.724 16:04:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.632 16:04:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:49.632 16:04:29 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:21:49.632 00:21:49.632 real 0m52.795s 00:21:49.632 user 2m57.664s 00:21:49.632 sys 0m9.924s 00:21:49.632 16:04:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:49.632 16:04:29 -- common/autotest_common.sh@10 -- # set +x 00:21:49.632 
************************************ 00:21:49.632 END TEST nvmf_perf_adq 00:21:49.632 ************************************ 00:21:49.892 16:04:29 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:49.892 16:04:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:49.892 16:04:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:49.892 16:04:29 -- common/autotest_common.sh@10 -- # set +x 00:21:49.892 ************************************ 00:21:49.892 START TEST nvmf_shutdown 00:21:49.892 ************************************ 00:21:49.892 16:04:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:49.892 * Looking for test storage... 00:21:49.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:49.892 16:04:29 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:49.892 16:04:29 -- nvmf/common.sh@7 -- # uname -s 00:21:49.892 16:04:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:49.892 16:04:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:49.892 16:04:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:49.892 16:04:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:49.892 16:04:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:49.892 16:04:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:49.892 16:04:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:49.892 16:04:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:49.892 16:04:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:49.892 16:04:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:49.892 16:04:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:49.892 16:04:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:49.892 16:04:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:49.892 16:04:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:49.892 16:04:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:49.892 16:04:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:49.892 16:04:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:49.892 16:04:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.892 16:04:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.892 16:04:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.893 16:04:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.893 16:04:29 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.893 16:04:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.893 16:04:29 -- paths/export.sh@5 -- # export PATH 00:21:49.893 16:04:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.893 16:04:29 -- nvmf/common.sh@47 -- # : 0 00:21:49.893 16:04:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:49.893 16:04:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:49.893 16:04:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:49.893 16:04:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:49.893 16:04:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:49.893 16:04:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:49.893 16:04:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:49.893 16:04:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:49.893 16:04:29 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:49.893 16:04:29 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:49.893 16:04:29 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:49.893 16:04:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:49.893 16:04:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:49.893 16:04:29 -- common/autotest_common.sh@10 -- # set +x 00:21:50.153 ************************************ 00:21:50.153 START TEST nvmf_shutdown_tc1 00:21:50.153 ************************************ 00:21:50.153 16:04:29 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:21:50.153 16:04:29 -- target/shutdown.sh@74 -- # starttarget 00:21:50.153 16:04:29 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:50.153 16:04:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:50.153 16:04:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:50.153 16:04:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:50.153 16:04:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:50.153 16:04:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:50.153 
16:04:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.153 16:04:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.153 16:04:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.153 16:04:29 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:50.153 16:04:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:50.153 16:04:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:50.153 16:04:29 -- common/autotest_common.sh@10 -- # set +x 00:21:55.432 16:04:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:55.432 16:04:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:55.432 16:04:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:55.432 16:04:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:55.432 16:04:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:55.432 16:04:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:55.432 16:04:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:55.432 16:04:34 -- nvmf/common.sh@295 -- # net_devs=() 00:21:55.432 16:04:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:55.432 16:04:34 -- nvmf/common.sh@296 -- # e810=() 00:21:55.432 16:04:34 -- nvmf/common.sh@296 -- # local -ga e810 00:21:55.432 16:04:34 -- nvmf/common.sh@297 -- # x722=() 00:21:55.432 16:04:34 -- nvmf/common.sh@297 -- # local -ga x722 00:21:55.432 16:04:34 -- nvmf/common.sh@298 -- # mlx=() 00:21:55.432 16:04:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:55.432 16:04:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.432 16:04:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.432 16:04:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.432 16:04:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.432 16:04:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.432 16:04:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.432 16:04:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.432 16:04:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.432 16:04:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.432 16:04:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.432 16:04:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.432 16:04:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:55.432 16:04:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:55.432 16:04:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:55.432 16:04:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:55.432 16:04:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:55.432 16:04:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:55.432 16:04:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.432 16:04:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:55.432 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:55.432 16:04:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.432 16:04:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.432 16:04:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.432 16:04:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.432 16:04:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.432 16:04:34 -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:21:55.432 16:04:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:55.432 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:55.432 16:04:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.432 16:04:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.432 16:04:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.432 16:04:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.432 16:04:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.432 16:04:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:55.432 16:04:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:55.432 16:04:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:55.432 16:04:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.432 16:04:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.432 16:04:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:55.432 16:04:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.432 16:04:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:55.432 Found net devices under 0000:86:00.0: cvl_0_0 00:21:55.432 16:04:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.432 16:04:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.432 16:04:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.432 16:04:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:55.432 16:04:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.432 16:04:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:55.432 Found net devices under 0000:86:00.1: cvl_0_1 00:21:55.432 16:04:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.432 16:04:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:55.432 16:04:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:55.432 16:04:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:55.432 16:04:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:55.432 16:04:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:55.432 16:04:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:55.432 16:04:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.432 16:04:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:55.432 16:04:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:55.433 16:04:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:55.433 16:04:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:55.433 16:04:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:55.433 16:04:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:55.433 16:04:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.433 16:04:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:55.433 16:04:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:55.433 16:04:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:55.433 16:04:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:55.433 16:04:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:55.433 16:04:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:55.433 16:04:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:55.433 16:04:34 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:55.433 16:04:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:55.433 16:04:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:55.433 16:04:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:55.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:55.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:21:55.433 00:21:55.433 --- 10.0.0.2 ping statistics --- 00:21:55.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.433 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:21:55.433 16:04:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:55.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:55.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:21:55.433 00:21:55.433 --- 10.0.0.1 ping statistics --- 00:21:55.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.433 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:21:55.433 16:04:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.433 16:04:34 -- nvmf/common.sh@411 -- # return 0 00:21:55.433 16:04:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:55.433 16:04:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.433 16:04:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:55.433 16:04:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:55.433 16:04:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.433 16:04:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:55.433 16:04:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:55.433 16:04:34 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:55.433 16:04:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:55.433 16:04:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:55.433 16:04:34 -- common/autotest_common.sh@10 -- # set +x 00:21:55.433 16:04:34 -- nvmf/common.sh@470 -- # nvmfpid=2508014 00:21:55.433 16:04:34 -- nvmf/common.sh@471 -- # waitforlisten 2508014 00:21:55.433 16:04:34 -- common/autotest_common.sh@817 -- # '[' -z 2508014 ']' 00:21:55.433 16:04:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.433 16:04:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:55.433 16:04:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.433 16:04:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:55.433 16:04:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:55.433 16:04:34 -- common/autotest_common.sh@10 -- # set +x 00:21:55.433 [2024-04-26 16:04:34.925044] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:21:55.433 [2024-04-26 16:04:34.925125] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.433 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.433 [2024-04-26 16:04:35.034621] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:55.693 [2024-04-26 16:04:35.264626] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.693 [2024-04-26 16:04:35.264672] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.693 [2024-04-26 16:04:35.264682] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.693 [2024-04-26 16:04:35.264693] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.693 [2024-04-26 16:04:35.264701] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.693 [2024-04-26 16:04:35.264834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.693 [2024-04-26 16:04:35.264914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:55.693 [2024-04-26 16:04:35.264954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.693 [2024-04-26 16:04:35.264968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:56.262 16:04:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:56.262 16:04:35 -- common/autotest_common.sh@850 -- # return 0 00:21:56.262 16:04:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:56.262 16:04:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:56.262 16:04:35 -- common/autotest_common.sh@10 -- # set +x 00:21:56.262 16:04:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.262 16:04:35 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:56.262 16:04:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.262 16:04:35 -- common/autotest_common.sh@10 -- # set +x 00:21:56.262 [2024-04-26 16:04:35.741907] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.262 16:04:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:56.262 16:04:35 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:56.262 16:04:35 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:56.262 16:04:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:56.262 16:04:35 -- common/autotest_common.sh@10 -- # set +x 00:21:56.262 16:04:35 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:56.262 16:04:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:56.262 16:04:35 -- target/shutdown.sh@28 -- # cat 00:21:56.262 16:04:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:56.262 16:04:35 -- target/shutdown.sh@28 -- # cat 00:21:56.262 16:04:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:56.262 16:04:35 -- target/shutdown.sh@28 -- # cat 00:21:56.262 16:04:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:56.262 16:04:35 -- target/shutdown.sh@28 -- # cat 00:21:56.262 16:04:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:56.262 16:04:35 -- target/shutdown.sh@28 
-- # cat 00:21:56.262 16:04:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:56.262 16:04:35 -- target/shutdown.sh@28 -- # cat 00:21:56.262 16:04:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:56.262 16:04:35 -- target/shutdown.sh@28 -- # cat 00:21:56.262 16:04:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:56.262 16:04:35 -- target/shutdown.sh@28 -- # cat 00:21:56.262 16:04:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:56.262 16:04:35 -- target/shutdown.sh@28 -- # cat 00:21:56.262 16:04:35 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:56.262 16:04:35 -- target/shutdown.sh@28 -- # cat 00:21:56.262 16:04:35 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:56.262 16:04:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:56.262 16:04:35 -- common/autotest_common.sh@10 -- # set +x 00:21:56.262 Malloc1 00:21:56.262 [2024-04-26 16:04:35.915163] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.522 Malloc2 00:21:56.522 Malloc3 00:21:56.780 Malloc4 00:21:56.780 Malloc5 00:21:56.780 Malloc6 00:21:57.040 Malloc7 00:21:57.040 Malloc8 00:21:57.299 Malloc9 00:21:57.299 Malloc10 00:21:57.299 16:04:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.299 16:04:36 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:57.299 16:04:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:57.299 16:04:36 -- common/autotest_common.sh@10 -- # set +x 00:21:57.559 16:04:36 -- target/shutdown.sh@78 -- # perfpid=2508514 00:21:57.559 16:04:36 -- target/shutdown.sh@79 -- # waitforlisten 2508514 /var/tmp/bdevperf.sock 00:21:57.559 16:04:36 -- common/autotest_common.sh@817 -- # '[' -z 2508514 ']' 00:21:57.559 16:04:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:57.559 16:04:36 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:57.559 16:04:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:57.559 16:04:36 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:57.559 16:04:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:57.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
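At this point the target side is fully provisioned: rpcs.txt has been assembled from one heredoc per subsystem (1 through 10) and replayed through rpc_cmd, which is why ten Malloc bdevs appear and the target reports a single NVMe/TCP listener on 10.0.0.2 port 4420. The heredoc bodies themselves are not echoed by xtrace; as a rough sketch only (standard SPDK RPC names, with placeholder malloc size, block size and serial number), each of the ten blocks amounts to something like:

    bdev_malloc_create 64 512 -b Malloc1
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The gen_nvmf_target_json output built next is the initiator-side counterpart: one bdev_nvme_attach_controller entry per cnode, emitted as JSON and handed to the application on a /dev/fd descriptor, as the fully expanded configuration printed further down shows.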
00:21:57.559 16:04:36 -- nvmf/common.sh@521 -- # config=() 00:21:57.559 16:04:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:57.559 16:04:36 -- nvmf/common.sh@521 -- # local subsystem config 00:21:57.559 16:04:36 -- common/autotest_common.sh@10 -- # set +x 00:21:57.559 16:04:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:57.559 16:04:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:57.559 { 00:21:57.559 "params": { 00:21:57.559 "name": "Nvme$subsystem", 00:21:57.559 "trtype": "$TEST_TRANSPORT", 00:21:57.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.559 "adrfam": "ipv4", 00:21:57.559 "trsvcid": "$NVMF_PORT", 00:21:57.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.559 "hdgst": ${hdgst:-false}, 00:21:57.559 "ddgst": ${ddgst:-false} 00:21:57.559 }, 00:21:57.559 "method": "bdev_nvme_attach_controller" 00:21:57.559 } 00:21:57.559 EOF 00:21:57.559 )") 00:21:57.559 16:04:36 -- nvmf/common.sh@543 -- # cat 00:21:57.559 16:04:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:57.559 16:04:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:57.559 { 00:21:57.559 "params": { 00:21:57.559 "name": "Nvme$subsystem", 00:21:57.559 "trtype": "$TEST_TRANSPORT", 00:21:57.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.559 "adrfam": "ipv4", 00:21:57.559 "trsvcid": "$NVMF_PORT", 00:21:57.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.559 "hdgst": ${hdgst:-false}, 00:21:57.559 "ddgst": ${ddgst:-false} 00:21:57.559 }, 00:21:57.559 "method": "bdev_nvme_attach_controller" 00:21:57.559 } 00:21:57.559 EOF 00:21:57.559 )") 00:21:57.559 16:04:36 -- nvmf/common.sh@543 -- # cat 00:21:57.559 16:04:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:57.559 16:04:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:57.559 { 00:21:57.559 "params": { 00:21:57.559 "name": "Nvme$subsystem", 00:21:57.559 "trtype": "$TEST_TRANSPORT", 00:21:57.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.559 "adrfam": "ipv4", 00:21:57.559 "trsvcid": "$NVMF_PORT", 00:21:57.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.559 "hdgst": ${hdgst:-false}, 00:21:57.559 "ddgst": ${ddgst:-false} 00:21:57.559 }, 00:21:57.559 "method": "bdev_nvme_attach_controller" 00:21:57.559 } 00:21:57.559 EOF 00:21:57.559 )") 00:21:57.559 16:04:36 -- nvmf/common.sh@543 -- # cat 00:21:57.559 16:04:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:57.559 16:04:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:57.559 { 00:21:57.559 "params": { 00:21:57.559 "name": "Nvme$subsystem", 00:21:57.559 "trtype": "$TEST_TRANSPORT", 00:21:57.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.559 "adrfam": "ipv4", 00:21:57.559 "trsvcid": "$NVMF_PORT", 00:21:57.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.559 "hdgst": ${hdgst:-false}, 00:21:57.559 "ddgst": ${ddgst:-false} 00:21:57.559 }, 00:21:57.559 "method": "bdev_nvme_attach_controller" 00:21:57.559 } 00:21:57.559 EOF 00:21:57.559 )") 00:21:57.559 16:04:37 -- nvmf/common.sh@543 -- # cat 00:21:57.559 16:04:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:57.559 16:04:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:57.559 { 00:21:57.559 "params": { 00:21:57.559 "name": "Nvme$subsystem", 00:21:57.559 "trtype": 
"$TEST_TRANSPORT", 00:21:57.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.559 "adrfam": "ipv4", 00:21:57.559 "trsvcid": "$NVMF_PORT", 00:21:57.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.559 "hdgst": ${hdgst:-false}, 00:21:57.559 "ddgst": ${ddgst:-false} 00:21:57.559 }, 00:21:57.559 "method": "bdev_nvme_attach_controller" 00:21:57.559 } 00:21:57.559 EOF 00:21:57.559 )") 00:21:57.559 16:04:37 -- nvmf/common.sh@543 -- # cat 00:21:57.559 16:04:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:57.559 16:04:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:57.559 { 00:21:57.559 "params": { 00:21:57.559 "name": "Nvme$subsystem", 00:21:57.559 "trtype": "$TEST_TRANSPORT", 00:21:57.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.559 "adrfam": "ipv4", 00:21:57.559 "trsvcid": "$NVMF_PORT", 00:21:57.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.559 "hdgst": ${hdgst:-false}, 00:21:57.559 "ddgst": ${ddgst:-false} 00:21:57.559 }, 00:21:57.559 "method": "bdev_nvme_attach_controller" 00:21:57.559 } 00:21:57.559 EOF 00:21:57.559 )") 00:21:57.559 16:04:37 -- nvmf/common.sh@543 -- # cat 00:21:57.559 16:04:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:57.559 16:04:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:57.559 { 00:21:57.559 "params": { 00:21:57.559 "name": "Nvme$subsystem", 00:21:57.559 "trtype": "$TEST_TRANSPORT", 00:21:57.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.559 "adrfam": "ipv4", 00:21:57.559 "trsvcid": "$NVMF_PORT", 00:21:57.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.559 "hdgst": ${hdgst:-false}, 00:21:57.559 "ddgst": ${ddgst:-false} 00:21:57.559 }, 00:21:57.559 "method": "bdev_nvme_attach_controller" 00:21:57.559 } 00:21:57.559 EOF 00:21:57.559 )") 00:21:57.559 16:04:37 -- nvmf/common.sh@543 -- # cat 00:21:57.559 16:04:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:57.559 16:04:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:57.559 { 00:21:57.559 "params": { 00:21:57.559 "name": "Nvme$subsystem", 00:21:57.559 "trtype": "$TEST_TRANSPORT", 00:21:57.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.559 "adrfam": "ipv4", 00:21:57.559 "trsvcid": "$NVMF_PORT", 00:21:57.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.560 "hdgst": ${hdgst:-false}, 00:21:57.560 "ddgst": ${ddgst:-false} 00:21:57.560 }, 00:21:57.560 "method": "bdev_nvme_attach_controller" 00:21:57.560 } 00:21:57.560 EOF 00:21:57.560 )") 00:21:57.560 16:04:37 -- nvmf/common.sh@543 -- # cat 00:21:57.560 16:04:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:57.560 16:04:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:57.560 { 00:21:57.560 "params": { 00:21:57.560 "name": "Nvme$subsystem", 00:21:57.560 "trtype": "$TEST_TRANSPORT", 00:21:57.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.560 "adrfam": "ipv4", 00:21:57.560 "trsvcid": "$NVMF_PORT", 00:21:57.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.560 "hdgst": ${hdgst:-false}, 00:21:57.560 "ddgst": ${ddgst:-false} 00:21:57.560 }, 00:21:57.560 "method": "bdev_nvme_attach_controller" 00:21:57.560 } 00:21:57.560 EOF 00:21:57.560 )") 00:21:57.560 16:04:37 -- nvmf/common.sh@543 -- # cat 00:21:57.560 
16:04:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:57.560 16:04:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:57.560 { 00:21:57.560 "params": { 00:21:57.560 "name": "Nvme$subsystem", 00:21:57.560 "trtype": "$TEST_TRANSPORT", 00:21:57.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.560 "adrfam": "ipv4", 00:21:57.560 "trsvcid": "$NVMF_PORT", 00:21:57.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.560 "hdgst": ${hdgst:-false}, 00:21:57.560 "ddgst": ${ddgst:-false} 00:21:57.560 }, 00:21:57.560 "method": "bdev_nvme_attach_controller" 00:21:57.560 } 00:21:57.560 EOF 00:21:57.560 )") 00:21:57.560 16:04:37 -- nvmf/common.sh@543 -- # cat 00:21:57.560 16:04:37 -- nvmf/common.sh@545 -- # jq . 00:21:57.560 16:04:37 -- nvmf/common.sh@546 -- # IFS=, 00:21:57.560 [2024-04-26 16:04:37.054038] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:57.560 [2024-04-26 16:04:37.054129] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:57.560 16:04:37 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:57.560 "params": { 00:21:57.560 "name": "Nvme1", 00:21:57.560 "trtype": "tcp", 00:21:57.560 "traddr": "10.0.0.2", 00:21:57.560 "adrfam": "ipv4", 00:21:57.560 "trsvcid": "4420", 00:21:57.560 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:57.560 "hdgst": false, 00:21:57.560 "ddgst": false 00:21:57.560 }, 00:21:57.560 "method": "bdev_nvme_attach_controller" 00:21:57.560 },{ 00:21:57.560 "params": { 00:21:57.560 "name": "Nvme2", 00:21:57.560 "trtype": "tcp", 00:21:57.560 "traddr": "10.0.0.2", 00:21:57.560 "adrfam": "ipv4", 00:21:57.560 "trsvcid": "4420", 00:21:57.560 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:57.560 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:57.560 "hdgst": false, 00:21:57.560 "ddgst": false 00:21:57.560 }, 00:21:57.560 "method": "bdev_nvme_attach_controller" 00:21:57.560 },{ 00:21:57.560 "params": { 00:21:57.560 "name": "Nvme3", 00:21:57.560 "trtype": "tcp", 00:21:57.560 "traddr": "10.0.0.2", 00:21:57.560 "adrfam": "ipv4", 00:21:57.560 "trsvcid": "4420", 00:21:57.560 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:57.560 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:57.560 "hdgst": false, 00:21:57.560 "ddgst": false 00:21:57.560 }, 00:21:57.560 "method": "bdev_nvme_attach_controller" 00:21:57.560 },{ 00:21:57.560 "params": { 00:21:57.560 "name": "Nvme4", 00:21:57.560 "trtype": "tcp", 00:21:57.560 "traddr": "10.0.0.2", 00:21:57.560 "adrfam": "ipv4", 00:21:57.560 "trsvcid": "4420", 00:21:57.560 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:57.560 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:57.560 "hdgst": false, 00:21:57.560 "ddgst": false 00:21:57.560 }, 00:21:57.560 "method": "bdev_nvme_attach_controller" 00:21:57.560 },{ 00:21:57.560 "params": { 00:21:57.560 "name": "Nvme5", 00:21:57.560 "trtype": "tcp", 00:21:57.560 "traddr": "10.0.0.2", 00:21:57.560 "adrfam": "ipv4", 00:21:57.560 "trsvcid": "4420", 00:21:57.560 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:57.560 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:57.560 "hdgst": false, 00:21:57.560 "ddgst": false 00:21:57.560 }, 00:21:57.560 "method": "bdev_nvme_attach_controller" 00:21:57.560 },{ 00:21:57.560 "params": { 00:21:57.560 "name": "Nvme6", 00:21:57.560 "trtype": "tcp", 00:21:57.560 
"traddr": "10.0.0.2", 00:21:57.560 "adrfam": "ipv4", 00:21:57.560 "trsvcid": "4420", 00:21:57.560 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:57.560 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:57.560 "hdgst": false, 00:21:57.560 "ddgst": false 00:21:57.560 }, 00:21:57.560 "method": "bdev_nvme_attach_controller" 00:21:57.560 },{ 00:21:57.560 "params": { 00:21:57.560 "name": "Nvme7", 00:21:57.560 "trtype": "tcp", 00:21:57.560 "traddr": "10.0.0.2", 00:21:57.560 "adrfam": "ipv4", 00:21:57.560 "trsvcid": "4420", 00:21:57.560 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:57.560 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:57.560 "hdgst": false, 00:21:57.560 "ddgst": false 00:21:57.560 }, 00:21:57.560 "method": "bdev_nvme_attach_controller" 00:21:57.560 },{ 00:21:57.560 "params": { 00:21:57.560 "name": "Nvme8", 00:21:57.560 "trtype": "tcp", 00:21:57.560 "traddr": "10.0.0.2", 00:21:57.560 "adrfam": "ipv4", 00:21:57.560 "trsvcid": "4420", 00:21:57.560 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:57.560 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:57.560 "hdgst": false, 00:21:57.560 "ddgst": false 00:21:57.560 }, 00:21:57.560 "method": "bdev_nvme_attach_controller" 00:21:57.560 },{ 00:21:57.560 "params": { 00:21:57.560 "name": "Nvme9", 00:21:57.560 "trtype": "tcp", 00:21:57.560 "traddr": "10.0.0.2", 00:21:57.560 "adrfam": "ipv4", 00:21:57.560 "trsvcid": "4420", 00:21:57.560 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:57.560 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:57.560 "hdgst": false, 00:21:57.560 "ddgst": false 00:21:57.560 }, 00:21:57.560 "method": "bdev_nvme_attach_controller" 00:21:57.560 },{ 00:21:57.560 "params": { 00:21:57.560 "name": "Nvme10", 00:21:57.560 "trtype": "tcp", 00:21:57.560 "traddr": "10.0.0.2", 00:21:57.560 "adrfam": "ipv4", 00:21:57.560 "trsvcid": "4420", 00:21:57.560 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:57.560 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:57.560 "hdgst": false, 00:21:57.560 "ddgst": false 00:21:57.560 }, 00:21:57.560 "method": "bdev_nvme_attach_controller" 00:21:57.560 }' 00:21:57.560 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.560 [2024-04-26 16:04:37.161578] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.820 [2024-04-26 16:04:37.395100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.356 16:04:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:00.356 16:04:39 -- common/autotest_common.sh@850 -- # return 0 00:22:00.356 16:04:39 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:00.356 16:04:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:00.356 16:04:39 -- common/autotest_common.sh@10 -- # set +x 00:22:00.356 16:04:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:00.356 16:04:39 -- target/shutdown.sh@83 -- # kill -9 2508514 00:22:00.356 16:04:39 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:00.356 16:04:39 -- target/shutdown.sh@87 -- # sleep 1 00:22:00.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2508514 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:00.926 16:04:40 -- target/shutdown.sh@88 -- # kill -0 2508014 00:22:00.926 16:04:40 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:00.926 16:04:40 -- target/shutdown.sh@91 -- # 
gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:00.926 16:04:40 -- nvmf/common.sh@521 -- # config=() 00:22:00.926 16:04:40 -- nvmf/common.sh@521 -- # local subsystem config 00:22:00.926 16:04:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:00.926 16:04:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:00.926 { 00:22:00.926 "params": { 00:22:00.926 "name": "Nvme$subsystem", 00:22:00.926 "trtype": "$TEST_TRANSPORT", 00:22:00.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.926 "adrfam": "ipv4", 00:22:00.926 "trsvcid": "$NVMF_PORT", 00:22:00.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.926 "hdgst": ${hdgst:-false}, 00:22:00.926 "ddgst": ${ddgst:-false} 00:22:00.926 }, 00:22:00.926 "method": "bdev_nvme_attach_controller" 00:22:00.926 } 00:22:00.926 EOF 00:22:00.926 )") 00:22:00.926 16:04:40 -- nvmf/common.sh@543 -- # cat 00:22:00.926 16:04:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:00.926 16:04:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:00.926 { 00:22:00.926 "params": { 00:22:00.926 "name": "Nvme$subsystem", 00:22:00.926 "trtype": "$TEST_TRANSPORT", 00:22:00.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.926 "adrfam": "ipv4", 00:22:00.926 "trsvcid": "$NVMF_PORT", 00:22:00.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.926 "hdgst": ${hdgst:-false}, 00:22:00.926 "ddgst": ${ddgst:-false} 00:22:00.926 }, 00:22:00.926 "method": "bdev_nvme_attach_controller" 00:22:00.926 } 00:22:00.926 EOF 00:22:00.926 )") 00:22:00.926 16:04:40 -- nvmf/common.sh@543 -- # cat 00:22:00.926 16:04:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:00.926 16:04:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:00.926 { 00:22:00.926 "params": { 00:22:00.926 "name": "Nvme$subsystem", 00:22:00.926 "trtype": "$TEST_TRANSPORT", 00:22:00.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.926 "adrfam": "ipv4", 00:22:00.926 "trsvcid": "$NVMF_PORT", 00:22:00.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.926 "hdgst": ${hdgst:-false}, 00:22:00.926 "ddgst": ${ddgst:-false} 00:22:00.926 }, 00:22:00.926 "method": "bdev_nvme_attach_controller" 00:22:00.926 } 00:22:00.926 EOF 00:22:00.926 )") 00:22:00.926 16:04:40 -- nvmf/common.sh@543 -- # cat 00:22:00.926 16:04:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:00.926 16:04:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:00.926 { 00:22:00.926 "params": { 00:22:00.926 "name": "Nvme$subsystem", 00:22:00.926 "trtype": "$TEST_TRANSPORT", 00:22:00.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.926 "adrfam": "ipv4", 00:22:00.926 "trsvcid": "$NVMF_PORT", 00:22:00.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.926 "hdgst": ${hdgst:-false}, 00:22:00.926 "ddgst": ${ddgst:-false} 00:22:00.926 }, 00:22:00.926 "method": "bdev_nvme_attach_controller" 00:22:00.926 } 00:22:00.926 EOF 00:22:00.926 )") 00:22:00.926 16:04:40 -- nvmf/common.sh@543 -- # cat 00:22:00.926 16:04:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:00.926 16:04:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:00.926 { 00:22:00.926 "params": { 00:22:00.926 "name": "Nvme$subsystem", 00:22:00.926 "trtype": "$TEST_TRANSPORT", 00:22:00.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.926 "adrfam": "ipv4", 
00:22:00.926 "trsvcid": "$NVMF_PORT", 00:22:00.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.926 "hdgst": ${hdgst:-false}, 00:22:00.926 "ddgst": ${ddgst:-false} 00:22:00.926 }, 00:22:00.926 "method": "bdev_nvme_attach_controller" 00:22:00.926 } 00:22:00.926 EOF 00:22:00.926 )") 00:22:00.926 16:04:40 -- nvmf/common.sh@543 -- # cat 00:22:00.926 16:04:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:00.926 16:04:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:00.926 { 00:22:00.926 "params": { 00:22:00.926 "name": "Nvme$subsystem", 00:22:00.926 "trtype": "$TEST_TRANSPORT", 00:22:00.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.926 "adrfam": "ipv4", 00:22:00.926 "trsvcid": "$NVMF_PORT", 00:22:00.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.926 "hdgst": ${hdgst:-false}, 00:22:00.926 "ddgst": ${ddgst:-false} 00:22:00.926 }, 00:22:00.926 "method": "bdev_nvme_attach_controller" 00:22:00.926 } 00:22:00.926 EOF 00:22:00.926 )") 00:22:00.926 16:04:40 -- nvmf/common.sh@543 -- # cat 00:22:00.926 16:04:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:00.926 16:04:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:00.926 { 00:22:00.926 "params": { 00:22:00.926 "name": "Nvme$subsystem", 00:22:00.926 "trtype": "$TEST_TRANSPORT", 00:22:00.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.926 "adrfam": "ipv4", 00:22:00.926 "trsvcid": "$NVMF_PORT", 00:22:00.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.926 "hdgst": ${hdgst:-false}, 00:22:00.926 "ddgst": ${ddgst:-false} 00:22:00.926 }, 00:22:00.926 "method": "bdev_nvme_attach_controller" 00:22:00.926 } 00:22:00.926 EOF 00:22:00.926 )") 00:22:00.926 16:04:40 -- nvmf/common.sh@543 -- # cat 00:22:00.926 16:04:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:00.926 16:04:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:00.926 { 00:22:00.926 "params": { 00:22:00.926 "name": "Nvme$subsystem", 00:22:00.926 "trtype": "$TEST_TRANSPORT", 00:22:00.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.926 "adrfam": "ipv4", 00:22:00.926 "trsvcid": "$NVMF_PORT", 00:22:00.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.926 "hdgst": ${hdgst:-false}, 00:22:00.926 "ddgst": ${ddgst:-false} 00:22:00.926 }, 00:22:00.926 "method": "bdev_nvme_attach_controller" 00:22:00.926 } 00:22:00.926 EOF 00:22:00.926 )") 00:22:00.926 16:04:40 -- nvmf/common.sh@543 -- # cat 00:22:00.926 16:04:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:00.926 16:04:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:00.926 { 00:22:00.926 "params": { 00:22:00.926 "name": "Nvme$subsystem", 00:22:00.926 "trtype": "$TEST_TRANSPORT", 00:22:00.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.926 "adrfam": "ipv4", 00:22:00.926 "trsvcid": "$NVMF_PORT", 00:22:00.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.926 "hdgst": ${hdgst:-false}, 00:22:00.926 "ddgst": ${ddgst:-false} 00:22:00.926 }, 00:22:00.926 "method": "bdev_nvme_attach_controller" 00:22:00.926 } 00:22:00.926 EOF 00:22:00.926 )") 00:22:01.186 16:04:40 -- nvmf/common.sh@543 -- # cat 00:22:01.186 16:04:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:01.186 16:04:40 -- 
nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:01.186 { 00:22:01.186 "params": { 00:22:01.186 "name": "Nvme$subsystem", 00:22:01.186 "trtype": "$TEST_TRANSPORT", 00:22:01.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.186 "adrfam": "ipv4", 00:22:01.186 "trsvcid": "$NVMF_PORT", 00:22:01.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.186 "hdgst": ${hdgst:-false}, 00:22:01.186 "ddgst": ${ddgst:-false} 00:22:01.186 }, 00:22:01.186 "method": "bdev_nvme_attach_controller" 00:22:01.186 } 00:22:01.186 EOF 00:22:01.186 )") 00:22:01.186 16:04:40 -- nvmf/common.sh@543 -- # cat 00:22:01.186 16:04:40 -- nvmf/common.sh@545 -- # jq . 00:22:01.186 16:04:40 -- nvmf/common.sh@546 -- # IFS=, 00:22:01.186 16:04:40 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:01.186 "params": { 00:22:01.186 "name": "Nvme1", 00:22:01.186 "trtype": "tcp", 00:22:01.186 "traddr": "10.0.0.2", 00:22:01.186 "adrfam": "ipv4", 00:22:01.186 "trsvcid": "4420", 00:22:01.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.186 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:01.186 "hdgst": false, 00:22:01.186 "ddgst": false 00:22:01.186 }, 00:22:01.186 "method": "bdev_nvme_attach_controller" 00:22:01.186 },{ 00:22:01.186 "params": { 00:22:01.186 "name": "Nvme2", 00:22:01.186 "trtype": "tcp", 00:22:01.186 "traddr": "10.0.0.2", 00:22:01.186 "adrfam": "ipv4", 00:22:01.186 "trsvcid": "4420", 00:22:01.186 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:01.187 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:01.187 "hdgst": false, 00:22:01.187 "ddgst": false 00:22:01.187 }, 00:22:01.187 "method": "bdev_nvme_attach_controller" 00:22:01.187 },{ 00:22:01.187 "params": { 00:22:01.187 "name": "Nvme3", 00:22:01.187 "trtype": "tcp", 00:22:01.187 "traddr": "10.0.0.2", 00:22:01.187 "adrfam": "ipv4", 00:22:01.187 "trsvcid": "4420", 00:22:01.187 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:01.187 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:01.187 "hdgst": false, 00:22:01.187 "ddgst": false 00:22:01.187 }, 00:22:01.187 "method": "bdev_nvme_attach_controller" 00:22:01.187 },{ 00:22:01.187 "params": { 00:22:01.187 "name": "Nvme4", 00:22:01.187 "trtype": "tcp", 00:22:01.187 "traddr": "10.0.0.2", 00:22:01.187 "adrfam": "ipv4", 00:22:01.187 "trsvcid": "4420", 00:22:01.187 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:01.187 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:01.187 "hdgst": false, 00:22:01.187 "ddgst": false 00:22:01.187 }, 00:22:01.187 "method": "bdev_nvme_attach_controller" 00:22:01.187 },{ 00:22:01.187 "params": { 00:22:01.187 "name": "Nvme5", 00:22:01.187 "trtype": "tcp", 00:22:01.187 "traddr": "10.0.0.2", 00:22:01.187 "adrfam": "ipv4", 00:22:01.187 "trsvcid": "4420", 00:22:01.187 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:01.187 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:01.187 "hdgst": false, 00:22:01.187 "ddgst": false 00:22:01.187 }, 00:22:01.187 "method": "bdev_nvme_attach_controller" 00:22:01.187 },{ 00:22:01.187 "params": { 00:22:01.187 "name": "Nvme6", 00:22:01.187 "trtype": "tcp", 00:22:01.187 "traddr": "10.0.0.2", 00:22:01.187 "adrfam": "ipv4", 00:22:01.187 "trsvcid": "4420", 00:22:01.187 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:01.187 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:01.187 "hdgst": false, 00:22:01.187 "ddgst": false 00:22:01.187 }, 00:22:01.187 "method": "bdev_nvme_attach_controller" 00:22:01.187 },{ 00:22:01.187 "params": { 00:22:01.187 "name": "Nvme7", 00:22:01.187 "trtype": "tcp", 00:22:01.187 "traddr": "10.0.0.2", 
00:22:01.187 "adrfam": "ipv4", 00:22:01.187 "trsvcid": "4420", 00:22:01.187 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:01.187 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:01.187 "hdgst": false, 00:22:01.187 "ddgst": false 00:22:01.187 }, 00:22:01.187 "method": "bdev_nvme_attach_controller" 00:22:01.187 },{ 00:22:01.187 "params": { 00:22:01.187 "name": "Nvme8", 00:22:01.187 "trtype": "tcp", 00:22:01.187 "traddr": "10.0.0.2", 00:22:01.187 "adrfam": "ipv4", 00:22:01.187 "trsvcid": "4420", 00:22:01.187 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:01.187 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:01.187 "hdgst": false, 00:22:01.187 "ddgst": false 00:22:01.187 }, 00:22:01.187 "method": "bdev_nvme_attach_controller" 00:22:01.187 },{ 00:22:01.187 "params": { 00:22:01.187 "name": "Nvme9", 00:22:01.187 "trtype": "tcp", 00:22:01.187 "traddr": "10.0.0.2", 00:22:01.187 "adrfam": "ipv4", 00:22:01.187 "trsvcid": "4420", 00:22:01.187 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:01.187 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:01.187 "hdgst": false, 00:22:01.187 "ddgst": false 00:22:01.187 }, 00:22:01.187 "method": "bdev_nvme_attach_controller" 00:22:01.187 },{ 00:22:01.187 "params": { 00:22:01.187 "name": "Nvme10", 00:22:01.187 "trtype": "tcp", 00:22:01.187 "traddr": "10.0.0.2", 00:22:01.187 "adrfam": "ipv4", 00:22:01.187 "trsvcid": "4420", 00:22:01.187 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:01.187 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:01.187 "hdgst": false, 00:22:01.187 "ddgst": false 00:22:01.187 }, 00:22:01.187 "method": "bdev_nvme_attach_controller" 00:22:01.187 }' 00:22:01.187 [2024-04-26 16:04:40.624551] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:01.187 [2024-04-26 16:04:40.624641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2509009 ] 00:22:01.187 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.187 [2024-04-26 16:04:40.729408] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.446 [2024-04-26 16:04:40.964626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.353 Running I/O for 1 seconds... 
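The verify job that produces the table below pushes 64 KiB I/O at queue depth 64 against all ten remote namespaces for roughly one second; the columns are per-bdev runtime in seconds, IOPS, throughput in MiB/s, failed and timed-out I/O per second, and average/min/max latency in microseconds. Stripped of the harness wrappers, the run traced above reduces to approximately this (paths relative to the SPDK tree, with the JSON supplied via process substitution exactly as the /dev/fd/62 argument indicates):

    ./build/examples/bdevperf \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 1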
00:22:04.403 
00:22:04.403 Latency(us)
00:22:04.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:04.403 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.403 Verification LBA range: start 0x0 length 0x400
00:22:04.403 Nvme1n1 : 1.09 235.09 14.69 0.00 0.00 268856.10 21655.37 273541.57
00:22:04.403 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.403 Verification LBA range: start 0x0 length 0x400
00:22:04.403 Nvme2n1 : 1.09 234.92 14.68 0.00 0.00 265146.99 23137.06 237069.36
00:22:04.403 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.403 Verification LBA range: start 0x0 length 0x400
00:22:04.403 Nvme3n1 : 1.11 231.51 14.47 0.00 0.00 265132.97 23934.89 251658.24
00:22:04.403 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.403 Verification LBA range: start 0x0 length 0x400
00:22:04.403 Nvme4n1 : 1.14 279.53 17.47 0.00 0.00 216371.20 22909.11 246187.41
00:22:04.403 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.403 Verification LBA range: start 0x0 length 0x400
00:22:04.403 Nvme5n1 : 1.12 228.83 14.30 0.00 0.00 259427.73 22567.18 244363.80
00:22:04.403 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.403 Verification LBA range: start 0x0 length 0x400
00:22:04.403 Nvme6n1 : 1.12 171.50 10.72 0.00 0.00 340796.99 24162.84 322779.05
00:22:04.403 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.403 Verification LBA range: start 0x0 length 0x400
00:22:04.403 Nvme7n1 : 1.15 278.38 17.40 0.00 0.00 207196.52 21427.42 231598.53
00:22:04.403 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.403 Verification LBA range: start 0x0 length 0x400
00:22:04.403 Nvme8n1 : 1.17 273.82 17.11 0.00 0.00 207643.16 21085.50 224304.08
00:22:04.403 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.403 Verification LBA range: start 0x0 length 0x400
00:22:04.403 Nvme9n1 : 1.16 220.49 13.78 0.00 0.00 253423.08 23934.89 264423.51
00:22:04.403 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:04.403 Verification LBA range: start 0x0 length 0x400
00:22:04.403 Nvme10n1 : 1.19 267.79 16.74 0.00 0.00 206212.59 16070.57 248011.02
00:22:04.403 ===================================================================================================================
00:22:04.403 Total : 2421.85 151.37 0.00 0.00 243196.65 16070.57 322779.05
00:22:05.807 16:04:45 -- target/shutdown.sh@94 -- # stoptarget
00:22:05.807 16:04:45 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:22:05.807 16:04:45 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:05.807 16:04:45 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:05.807 16:04:45 -- target/shutdown.sh@45 -- # nvmftestfini
00:22:05.807 16:04:45 -- nvmf/common.sh@477 -- # nvmfcleanup
00:22:05.807 16:04:45 -- nvmf/common.sh@117 -- # sync
00:22:05.807 16:04:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:05.807 16:04:45 -- nvmf/common.sh@120 -- # set +e
00:22:05.808 16:04:45 -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:05.808 16:04:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:22:05.808 rmmod nvme_tcp
00:22:05.808 rmmod nvme_fabrics
00:22:05.808 rmmod
nvme_keyring 00:22:05.808 16:04:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:05.808 16:04:45 -- nvmf/common.sh@124 -- # set -e 00:22:05.808 16:04:45 -- nvmf/common.sh@125 -- # return 0 00:22:05.808 16:04:45 -- nvmf/common.sh@478 -- # '[' -n 2508014 ']' 00:22:05.808 16:04:45 -- nvmf/common.sh@479 -- # killprocess 2508014 00:22:05.808 16:04:45 -- common/autotest_common.sh@936 -- # '[' -z 2508014 ']' 00:22:05.808 16:04:45 -- common/autotest_common.sh@940 -- # kill -0 2508014 00:22:05.808 16:04:45 -- common/autotest_common.sh@941 -- # uname 00:22:05.808 16:04:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:05.808 16:04:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2508014 00:22:05.808 16:04:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:05.808 16:04:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:05.808 16:04:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2508014' 00:22:05.808 killing process with pid 2508014 00:22:05.808 16:04:45 -- common/autotest_common.sh@955 -- # kill 2508014 00:22:05.808 16:04:45 -- common/autotest_common.sh@960 -- # wait 2508014 00:22:09.102 16:04:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:09.102 16:04:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:09.102 16:04:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:09.102 16:04:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:09.102 16:04:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:09.102 16:04:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.102 16:04:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:09.102 16:04:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.012 16:04:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:11.012 00:22:11.012 real 0m20.901s 00:22:11.012 user 0m59.186s 00:22:11.012 sys 0m5.718s 00:22:11.012 16:04:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:11.012 16:04:50 -- common/autotest_common.sh@10 -- # set +x 00:22:11.012 ************************************ 00:22:11.012 END TEST nvmf_shutdown_tc1 00:22:11.012 ************************************ 00:22:11.012 16:04:50 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:11.012 16:04:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:11.012 16:04:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:11.012 16:04:50 -- common/autotest_common.sh@10 -- # set +x 00:22:11.272 ************************************ 00:22:11.272 START TEST nvmf_shutdown_tc2 00:22:11.272 ************************************ 00:22:11.272 16:04:50 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:22:11.272 16:04:50 -- target/shutdown.sh@99 -- # starttarget 00:22:11.272 16:04:50 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:11.272 16:04:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:11.272 16:04:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.272 16:04:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:11.272 16:04:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:11.272 16:04:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:11.272 16:04:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.272 16:04:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.272 16:04:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.272 
16:04:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:11.272 16:04:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:11.272 16:04:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:11.272 16:04:50 -- common/autotest_common.sh@10 -- # set +x 00:22:11.272 16:04:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:11.272 16:04:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:11.272 16:04:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:11.272 16:04:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:11.272 16:04:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:11.272 16:04:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:11.272 16:04:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:11.272 16:04:50 -- nvmf/common.sh@295 -- # net_devs=() 00:22:11.272 16:04:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:11.272 16:04:50 -- nvmf/common.sh@296 -- # e810=() 00:22:11.272 16:04:50 -- nvmf/common.sh@296 -- # local -ga e810 00:22:11.272 16:04:50 -- nvmf/common.sh@297 -- # x722=() 00:22:11.272 16:04:50 -- nvmf/common.sh@297 -- # local -ga x722 00:22:11.272 16:04:50 -- nvmf/common.sh@298 -- # mlx=() 00:22:11.272 16:04:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:11.272 16:04:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.272 16:04:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.272 16:04:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.272 16:04:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.272 16:04:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.272 16:04:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.272 16:04:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.272 16:04:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.272 16:04:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.272 16:04:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.272 16:04:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.272 16:04:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:11.272 16:04:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:11.272 16:04:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:11.272 16:04:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:11.272 16:04:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:11.272 16:04:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:11.272 16:04:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.272 16:04:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:11.272 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:11.272 16:04:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:11.272 16:04:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:11.272 16:04:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.272 16:04:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.272 16:04:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:11.272 16:04:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.272 16:04:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:11.272 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:11.272 16:04:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:11.272 
16:04:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:11.272 16:04:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.272 16:04:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.272 16:04:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:11.272 16:04:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:11.272 16:04:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:11.272 16:04:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:11.272 16:04:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.272 16:04:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.272 16:04:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:11.272 16:04:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.272 16:04:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:11.273 Found net devices under 0000:86:00.0: cvl_0_0 00:22:11.273 16:04:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.273 16:04:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.273 16:04:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.273 16:04:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:11.273 16:04:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.273 16:04:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:11.273 Found net devices under 0000:86:00.1: cvl_0_1 00:22:11.273 16:04:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.273 16:04:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:11.273 16:04:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:11.273 16:04:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:11.273 16:04:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:11.273 16:04:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:11.273 16:04:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.273 16:04:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.273 16:04:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.273 16:04:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:11.273 16:04:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.273 16:04:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.273 16:04:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:11.273 16:04:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.273 16:04:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.273 16:04:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:11.273 16:04:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:11.273 16:04:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:11.273 16:04:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.273 16:04:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:11.273 16:04:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.273 16:04:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:11.273 16:04:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.532 16:04:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.532 16:04:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:22:11.532 16:04:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:11.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:22:11.532 00:22:11.532 --- 10.0.0.2 ping statistics --- 00:22:11.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.533 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:22:11.533 16:04:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:11.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.404 ms 00:22:11.533 00:22:11.533 --- 10.0.0.1 ping statistics --- 00:22:11.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.533 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:22:11.533 16:04:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.533 16:04:51 -- nvmf/common.sh@411 -- # return 0 00:22:11.533 16:04:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:11.533 16:04:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.533 16:04:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:11.533 16:04:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:11.533 16:04:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.533 16:04:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:11.533 16:04:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:11.533 16:04:51 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:11.533 16:04:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:11.533 16:04:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:11.533 16:04:51 -- common/autotest_common.sh@10 -- # set +x 00:22:11.533 16:04:51 -- nvmf/common.sh@470 -- # nvmfpid=2510948 00:22:11.533 16:04:51 -- nvmf/common.sh@471 -- # waitforlisten 2510948 00:22:11.533 16:04:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:11.533 16:04:51 -- common/autotest_common.sh@817 -- # '[' -z 2510948 ']' 00:22:11.533 16:04:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.533 16:04:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:11.533 16:04:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.533 16:04:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:11.533 16:04:51 -- common/autotest_common.sh@10 -- # set +x 00:22:11.533 [2024-04-26 16:04:51.168450] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:11.533 [2024-04-26 16:04:51.168537] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.792 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.792 [2024-04-26 16:04:51.277581] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:12.052 [2024-04-26 16:04:51.491660] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:12.052 [2024-04-26 16:04:51.491703] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.052 [2024-04-26 16:04:51.491712] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.052 [2024-04-26 16:04:51.491722] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.052 [2024-04-26 16:04:51.491729] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:12.052 [2024-04-26 16:04:51.491856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.052 [2024-04-26 16:04:51.491923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:12.052 [2024-04-26 16:04:51.492006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.052 [2024-04-26 16:04:51.492028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:12.311 16:04:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:12.311 16:04:51 -- common/autotest_common.sh@850 -- # return 0 00:22:12.311 16:04:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:12.311 16:04:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:12.311 16:04:51 -- common/autotest_common.sh@10 -- # set +x 00:22:12.311 16:04:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.311 16:04:51 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:12.311 16:04:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.311 16:04:51 -- common/autotest_common.sh@10 -- # set +x 00:22:12.311 [2024-04-26 16:04:51.985108] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.311 16:04:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:12.571 16:04:51 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:12.571 16:04:51 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:12.571 16:04:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:12.571 16:04:51 -- common/autotest_common.sh@10 -- # set +x 00:22:12.571 16:04:51 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:12.571 16:04:52 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.571 16:04:52 -- target/shutdown.sh@28 -- # cat 00:22:12.571 16:04:52 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.571 16:04:52 -- target/shutdown.sh@28 -- # cat 00:22:12.571 16:04:52 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.571 16:04:52 -- target/shutdown.sh@28 -- # cat 00:22:12.571 16:04:52 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.571 16:04:52 -- target/shutdown.sh@28 -- # cat 00:22:12.571 16:04:52 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.571 16:04:52 -- target/shutdown.sh@28 -- # cat 00:22:12.571 16:04:52 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.571 16:04:52 -- target/shutdown.sh@28 -- # cat 00:22:12.571 16:04:52 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.571 16:04:52 -- target/shutdown.sh@28 -- # cat 00:22:12.571 16:04:52 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.571 16:04:52 -- target/shutdown.sh@28 -- # cat 00:22:12.571 16:04:52 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.571 16:04:52 -- 
target/shutdown.sh@28 -- # cat 00:22:12.571 16:04:52 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.571 16:04:52 -- target/shutdown.sh@28 -- # cat 00:22:12.571 16:04:52 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:12.571 16:04:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:12.571 16:04:52 -- common/autotest_common.sh@10 -- # set +x 00:22:12.571 Malloc1 00:22:12.571 [2024-04-26 16:04:52.154132] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.571 Malloc2 00:22:12.831 Malloc3 00:22:12.831 Malloc4 00:22:13.089 Malloc5 00:22:13.089 Malloc6 00:22:13.348 Malloc7 00:22:13.348 Malloc8 00:22:13.607 Malloc9 00:22:13.607 Malloc10 00:22:13.607 16:04:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:13.607 16:04:53 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:13.607 16:04:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:13.607 16:04:53 -- common/autotest_common.sh@10 -- # set +x 00:22:13.607 16:04:53 -- target/shutdown.sh@103 -- # perfpid=2511248 00:22:13.607 16:04:53 -- target/shutdown.sh@104 -- # waitforlisten 2511248 /var/tmp/bdevperf.sock 00:22:13.607 16:04:53 -- common/autotest_common.sh@817 -- # '[' -z 2511248 ']' 00:22:13.607 16:04:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:13.607 16:04:53 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:13.607 16:04:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:13.607 16:04:53 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:13.607 16:04:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:13.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
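Compared with tc1, this bdevperf instance is started with an RPC socket (-r /var/tmp/bdevperf.sock) and a ten second run (-t 10), which gives the script a control channel and enough runtime to exercise the shutdown path while I/O is still in flight. The wait announced above follows the usual pattern: launch the app in the background, then block on its RPC socket until the framework is initialized. A rough equivalent outside the harness (rpc_cmd in the trace is the test wrapper around scripts/rpc.py, and framework_wait_init is the same call seen at shutdown.sh@80 earlier):

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock framework_wait_init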
00:22:13.607 16:04:53 -- nvmf/common.sh@521 -- # config=() 00:22:13.607 16:04:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:13.607 16:04:53 -- nvmf/common.sh@521 -- # local subsystem config 00:22:13.607 16:04:53 -- common/autotest_common.sh@10 -- # set +x 00:22:13.607 16:04:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:13.607 16:04:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:13.607 { 00:22:13.607 "params": { 00:22:13.607 "name": "Nvme$subsystem", 00:22:13.607 "trtype": "$TEST_TRANSPORT", 00:22:13.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.607 "adrfam": "ipv4", 00:22:13.607 "trsvcid": "$NVMF_PORT", 00:22:13.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.607 "hdgst": ${hdgst:-false}, 00:22:13.607 "ddgst": ${ddgst:-false} 00:22:13.607 }, 00:22:13.607 "method": "bdev_nvme_attach_controller" 00:22:13.607 } 00:22:13.607 EOF 00:22:13.607 )") 00:22:13.608 16:04:53 -- nvmf/common.sh@543 -- # cat 00:22:13.608 16:04:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:13.608 16:04:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:13.608 { 00:22:13.608 "params": { 00:22:13.608 "name": "Nvme$subsystem", 00:22:13.608 "trtype": "$TEST_TRANSPORT", 00:22:13.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.608 "adrfam": "ipv4", 00:22:13.608 "trsvcid": "$NVMF_PORT", 00:22:13.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.608 "hdgst": ${hdgst:-false}, 00:22:13.608 "ddgst": ${ddgst:-false} 00:22:13.608 }, 00:22:13.608 "method": "bdev_nvme_attach_controller" 00:22:13.608 } 00:22:13.608 EOF 00:22:13.608 )") 00:22:13.608 16:04:53 -- nvmf/common.sh@543 -- # cat 00:22:13.608 16:04:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:13.608 16:04:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:13.608 { 00:22:13.608 "params": { 00:22:13.608 "name": "Nvme$subsystem", 00:22:13.608 "trtype": "$TEST_TRANSPORT", 00:22:13.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.608 "adrfam": "ipv4", 00:22:13.608 "trsvcid": "$NVMF_PORT", 00:22:13.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.608 "hdgst": ${hdgst:-false}, 00:22:13.608 "ddgst": ${ddgst:-false} 00:22:13.608 }, 00:22:13.608 "method": "bdev_nvme_attach_controller" 00:22:13.608 } 00:22:13.608 EOF 00:22:13.608 )") 00:22:13.608 16:04:53 -- nvmf/common.sh@543 -- # cat 00:22:13.608 16:04:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:13.608 16:04:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:13.608 { 00:22:13.608 "params": { 00:22:13.608 "name": "Nvme$subsystem", 00:22:13.608 "trtype": "$TEST_TRANSPORT", 00:22:13.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.608 "adrfam": "ipv4", 00:22:13.608 "trsvcid": "$NVMF_PORT", 00:22:13.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.608 "hdgst": ${hdgst:-false}, 00:22:13.608 "ddgst": ${ddgst:-false} 00:22:13.608 }, 00:22:13.608 "method": "bdev_nvme_attach_controller" 00:22:13.608 } 00:22:13.608 EOF 00:22:13.608 )") 00:22:13.608 16:04:53 -- nvmf/common.sh@543 -- # cat 00:22:13.608 16:04:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:13.608 16:04:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:13.608 { 00:22:13.608 "params": { 00:22:13.608 "name": "Nvme$subsystem", 00:22:13.608 "trtype": 
"$TEST_TRANSPORT", 00:22:13.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.608 "adrfam": "ipv4", 00:22:13.608 "trsvcid": "$NVMF_PORT", 00:22:13.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.608 "hdgst": ${hdgst:-false}, 00:22:13.608 "ddgst": ${ddgst:-false} 00:22:13.608 }, 00:22:13.608 "method": "bdev_nvme_attach_controller" 00:22:13.608 } 00:22:13.608 EOF 00:22:13.608 )") 00:22:13.608 16:04:53 -- nvmf/common.sh@543 -- # cat 00:22:13.608 16:04:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:13.608 16:04:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:13.608 { 00:22:13.608 "params": { 00:22:13.608 "name": "Nvme$subsystem", 00:22:13.608 "trtype": "$TEST_TRANSPORT", 00:22:13.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.608 "adrfam": "ipv4", 00:22:13.608 "trsvcid": "$NVMF_PORT", 00:22:13.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.608 "hdgst": ${hdgst:-false}, 00:22:13.608 "ddgst": ${ddgst:-false} 00:22:13.608 }, 00:22:13.608 "method": "bdev_nvme_attach_controller" 00:22:13.608 } 00:22:13.608 EOF 00:22:13.608 )") 00:22:13.608 16:04:53 -- nvmf/common.sh@543 -- # cat 00:22:13.608 16:04:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:13.608 16:04:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:13.608 { 00:22:13.608 "params": { 00:22:13.608 "name": "Nvme$subsystem", 00:22:13.608 "trtype": "$TEST_TRANSPORT", 00:22:13.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.608 "adrfam": "ipv4", 00:22:13.608 "trsvcid": "$NVMF_PORT", 00:22:13.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.608 "hdgst": ${hdgst:-false}, 00:22:13.608 "ddgst": ${ddgst:-false} 00:22:13.608 }, 00:22:13.608 "method": "bdev_nvme_attach_controller" 00:22:13.608 } 00:22:13.608 EOF 00:22:13.608 )") 00:22:13.608 16:04:53 -- nvmf/common.sh@543 -- # cat 00:22:13.608 16:04:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:13.608 16:04:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:13.608 { 00:22:13.608 "params": { 00:22:13.608 "name": "Nvme$subsystem", 00:22:13.608 "trtype": "$TEST_TRANSPORT", 00:22:13.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.608 "adrfam": "ipv4", 00:22:13.608 "trsvcid": "$NVMF_PORT", 00:22:13.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.608 "hdgst": ${hdgst:-false}, 00:22:13.608 "ddgst": ${ddgst:-false} 00:22:13.608 }, 00:22:13.608 "method": "bdev_nvme_attach_controller" 00:22:13.608 } 00:22:13.608 EOF 00:22:13.608 )") 00:22:13.608 16:04:53 -- nvmf/common.sh@543 -- # cat 00:22:13.608 16:04:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:13.608 16:04:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:13.608 { 00:22:13.608 "params": { 00:22:13.608 "name": "Nvme$subsystem", 00:22:13.608 "trtype": "$TEST_TRANSPORT", 00:22:13.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.608 "adrfam": "ipv4", 00:22:13.608 "trsvcid": "$NVMF_PORT", 00:22:13.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.608 "hdgst": ${hdgst:-false}, 00:22:13.608 "ddgst": ${ddgst:-false} 00:22:13.608 }, 00:22:13.608 "method": "bdev_nvme_attach_controller" 00:22:13.608 } 00:22:13.608 EOF 00:22:13.608 )") 00:22:13.608 16:04:53 -- nvmf/common.sh@543 -- # cat 00:22:13.608 
16:04:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:13.608 16:04:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:13.608 { 00:22:13.608 "params": { 00:22:13.608 "name": "Nvme$subsystem", 00:22:13.608 "trtype": "$TEST_TRANSPORT", 00:22:13.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.608 "adrfam": "ipv4", 00:22:13.608 "trsvcid": "$NVMF_PORT", 00:22:13.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.608 "hdgst": ${hdgst:-false}, 00:22:13.608 "ddgst": ${ddgst:-false} 00:22:13.608 }, 00:22:13.608 "method": "bdev_nvme_attach_controller" 00:22:13.608 } 00:22:13.608 EOF 00:22:13.608 )") 00:22:13.608 16:04:53 -- nvmf/common.sh@543 -- # cat 00:22:13.608 16:04:53 -- nvmf/common.sh@545 -- # jq . 00:22:13.608 [2024-04-26 16:04:53.279649] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:13.608 [2024-04-26 16:04:53.279737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2511248 ] 00:22:13.608 16:04:53 -- nvmf/common.sh@546 -- # IFS=, 00:22:13.608 16:04:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:13.608 "params": { 00:22:13.608 "name": "Nvme1", 00:22:13.608 "trtype": "tcp", 00:22:13.608 "traddr": "10.0.0.2", 00:22:13.608 "adrfam": "ipv4", 00:22:13.608 "trsvcid": "4420", 00:22:13.608 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:13.608 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:13.608 "hdgst": false, 00:22:13.608 "ddgst": false 00:22:13.608 }, 00:22:13.608 "method": "bdev_nvme_attach_controller" 00:22:13.608 },{ 00:22:13.608 "params": { 00:22:13.608 "name": "Nvme2", 00:22:13.608 "trtype": "tcp", 00:22:13.608 "traddr": "10.0.0.2", 00:22:13.608 "adrfam": "ipv4", 00:22:13.608 "trsvcid": "4420", 00:22:13.608 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:13.608 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:13.608 "hdgst": false, 00:22:13.608 "ddgst": false 00:22:13.608 }, 00:22:13.608 "method": "bdev_nvme_attach_controller" 00:22:13.608 },{ 00:22:13.608 "params": { 00:22:13.608 "name": "Nvme3", 00:22:13.608 "trtype": "tcp", 00:22:13.608 "traddr": "10.0.0.2", 00:22:13.608 "adrfam": "ipv4", 00:22:13.608 "trsvcid": "4420", 00:22:13.608 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:13.608 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:13.608 "hdgst": false, 00:22:13.608 "ddgst": false 00:22:13.608 }, 00:22:13.608 "method": "bdev_nvme_attach_controller" 00:22:13.608 },{ 00:22:13.608 "params": { 00:22:13.608 "name": "Nvme4", 00:22:13.608 "trtype": "tcp", 00:22:13.608 "traddr": "10.0.0.2", 00:22:13.608 "adrfam": "ipv4", 00:22:13.608 "trsvcid": "4420", 00:22:13.609 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:13.609 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:13.609 "hdgst": false, 00:22:13.609 "ddgst": false 00:22:13.609 }, 00:22:13.609 "method": "bdev_nvme_attach_controller" 00:22:13.609 },{ 00:22:13.609 "params": { 00:22:13.609 "name": "Nvme5", 00:22:13.609 "trtype": "tcp", 00:22:13.609 "traddr": "10.0.0.2", 00:22:13.609 "adrfam": "ipv4", 00:22:13.609 "trsvcid": "4420", 00:22:13.609 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:13.609 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:13.609 "hdgst": false, 00:22:13.609 "ddgst": false 00:22:13.609 }, 00:22:13.609 "method": "bdev_nvme_attach_controller" 00:22:13.609 },{ 00:22:13.609 "params": { 00:22:13.609 "name": "Nvme6", 00:22:13.609 "trtype": 
"tcp", 00:22:13.609 "traddr": "10.0.0.2", 00:22:13.609 "adrfam": "ipv4", 00:22:13.609 "trsvcid": "4420", 00:22:13.609 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:13.609 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:13.609 "hdgst": false, 00:22:13.609 "ddgst": false 00:22:13.609 }, 00:22:13.609 "method": "bdev_nvme_attach_controller" 00:22:13.609 },{ 00:22:13.609 "params": { 00:22:13.609 "name": "Nvme7", 00:22:13.609 "trtype": "tcp", 00:22:13.609 "traddr": "10.0.0.2", 00:22:13.609 "adrfam": "ipv4", 00:22:13.609 "trsvcid": "4420", 00:22:13.609 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:13.609 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:13.609 "hdgst": false, 00:22:13.609 "ddgst": false 00:22:13.609 }, 00:22:13.609 "method": "bdev_nvme_attach_controller" 00:22:13.609 },{ 00:22:13.609 "params": { 00:22:13.609 "name": "Nvme8", 00:22:13.609 "trtype": "tcp", 00:22:13.609 "traddr": "10.0.0.2", 00:22:13.609 "adrfam": "ipv4", 00:22:13.609 "trsvcid": "4420", 00:22:13.609 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:13.609 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:13.609 "hdgst": false, 00:22:13.609 "ddgst": false 00:22:13.609 }, 00:22:13.609 "method": "bdev_nvme_attach_controller" 00:22:13.609 },{ 00:22:13.609 "params": { 00:22:13.609 "name": "Nvme9", 00:22:13.609 "trtype": "tcp", 00:22:13.609 "traddr": "10.0.0.2", 00:22:13.609 "adrfam": "ipv4", 00:22:13.609 "trsvcid": "4420", 00:22:13.609 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:13.609 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:13.609 "hdgst": false, 00:22:13.609 "ddgst": false 00:22:13.609 }, 00:22:13.609 "method": "bdev_nvme_attach_controller" 00:22:13.609 },{ 00:22:13.609 "params": { 00:22:13.609 "name": "Nvme10", 00:22:13.609 "trtype": "tcp", 00:22:13.609 "traddr": "10.0.0.2", 00:22:13.609 "adrfam": "ipv4", 00:22:13.609 "trsvcid": "4420", 00:22:13.609 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:13.609 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:13.609 "hdgst": false, 00:22:13.609 "ddgst": false 00:22:13.609 }, 00:22:13.609 "method": "bdev_nvme_attach_controller" 00:22:13.609 }' 00:22:13.867 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.867 [2024-04-26 16:04:53.385874] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.125 [2024-04-26 16:04:53.622260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.033 Running I/O for 10 seconds... 
00:22:16.293 16:04:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:16.293 16:04:55 -- common/autotest_common.sh@850 -- # return 0 00:22:16.293 16:04:55 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:16.293 16:04:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.293 16:04:55 -- common/autotest_common.sh@10 -- # set +x 00:22:16.293 16:04:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.293 16:04:55 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:16.293 16:04:55 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:16.293 16:04:55 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:16.293 16:04:55 -- target/shutdown.sh@57 -- # local ret=1 00:22:16.293 16:04:55 -- target/shutdown.sh@58 -- # local i 00:22:16.293 16:04:55 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:16.293 16:04:55 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:16.293 16:04:55 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:16.293 16:04:55 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:16.293 16:04:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.293 16:04:55 -- common/autotest_common.sh@10 -- # set +x 00:22:16.293 16:04:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.293 16:04:55 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:16.293 16:04:55 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:16.293 16:04:55 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:16.553 16:04:56 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:16.553 16:04:56 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:16.553 16:04:56 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:16.553 16:04:56 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:16.553 16:04:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.553 16:04:56 -- common/autotest_common.sh@10 -- # set +x 00:22:16.812 16:04:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.812 16:04:56 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:16.812 16:04:56 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:16.812 16:04:56 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:17.072 16:04:56 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:17.072 16:04:56 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:17.072 16:04:56 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:17.072 16:04:56 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:17.072 16:04:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.072 16:04:56 -- common/autotest_common.sh@10 -- # set +x 00:22:17.072 16:04:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.072 16:04:56 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:17.072 16:04:56 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:17.072 16:04:56 -- target/shutdown.sh@64 -- # ret=0 00:22:17.072 16:04:56 -- target/shutdown.sh@65 -- # break 00:22:17.072 16:04:56 -- target/shutdown.sh@69 -- # return 0 00:22:17.072 16:04:56 -- target/shutdown.sh@110 -- # killprocess 2511248 00:22:17.072 16:04:56 -- common/autotest_common.sh@936 -- # '[' -z 2511248 ']' 00:22:17.072 16:04:56 -- common/autotest_common.sh@940 -- # kill -0 2511248 00:22:17.072 16:04:56 -- common/autotest_common.sh@941 -- # uname 00:22:17.072 16:04:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
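The read counters climbing through 3, 67 and 131 above come from the waitforio loop in target/shutdown.sh, which polls the bdevperf RPC socket until Nvme1n1 has completed at least 100 reads or ten attempts have elapsed. A condensed sketch of that loop, using the helper and jq filter visible in the trace (the real function may carry extra argument checks):

```bash
waitforio() {
  # Usage: waitforio <rpc socket> <bdev name>, e.g.
  #   waitforio /var/tmp/bdevperf.sock Nvme1n1
  local rpc_sock=$1 bdev=$2
  local ret=1 i read_io_count
  for ((i = 10; i != 0; i--)); do
    read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
      jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}
```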
00:22:17.072 16:04:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2511248
00:22:17.072 16:04:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:22:17.072 16:04:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:22:17.072 16:04:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2511248'
00:22:17.072 killing process with pid 2511248
00:22:17.072 16:04:56 -- common/autotest_common.sh@955 -- # kill 2511248
00:22:17.072 16:04:56 -- common/autotest_common.sh@960 -- # wait 2511248
00:22:17.072 Received shutdown signal, test time was about 0.984340 seconds
00:22:17.072
00:22:17.072 Latency(us)
00:22:17.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:17.072 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.072 Verification LBA range: start 0x0 length 0x400
00:22:17.072 Nvme1n1 : 0.92 207.94 13.00 0.00 0.00 304382.37 27582.11 240716.58
00:22:17.072 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.072 Verification LBA range: start 0x0 length 0x400
00:22:17.072 Nvme2n1 : 0.98 260.26 16.27 0.00 0.00 238597.79 25872.47 300895.72
00:22:17.072 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.072 Verification LBA range: start 0x0 length 0x400
00:22:17.072 Nvme3n1 : 0.93 279.83 17.49 0.00 0.00 217179.13 4445.05 186008.26
00:22:17.072 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.072 Verification LBA range: start 0x0 length 0x400
00:22:17.072 Nvme4n1 : 0.94 204.80 12.80 0.00 0.00 291865.23 25986.45 291777.67
00:22:17.072 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.072 Verification LBA range: start 0x0 length 0x400
00:22:17.072 Nvme5n1 : 0.94 203.51 12.72 0.00 0.00 288065.15 23934.89 271717.95
00:22:17.072 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.072 Verification LBA range: start 0x0 length 0x400
00:22:17.072 Nvme6n1 : 0.96 265.32 16.58 0.00 0.00 216889.43 39435.58 207891.59
00:22:17.072 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.072 Verification LBA range: start 0x0 length 0x400
00:22:17.072 Nvme7n1 : 0.96 267.12 16.70 0.00 0.00 211114.74 22567.18 226127.69
00:22:17.072 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.072 Verification LBA range: start 0x0 length 0x400
00:22:17.072 Nvme8n1 : 0.95 268.34 16.77 0.00 0.00 205780.37 22795.13 184184.65
00:22:17.072 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.072 Verification LBA range: start 0x0 length 0x400
00:22:17.072 Nvme9n1 : 0.92 208.70 13.04 0.00 0.00 257104.44 24048.86 257129.07
00:22:17.072 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:17.072 Verification LBA range: start 0x0 length 0x400
00:22:17.072 Nvme10n1 : 0.98 196.66 12.29 0.00 0.00 270490.64 23137.06 328249.88
00:22:17.072 ===================================================================================================================
00:22:17.072 Total : 2362.49 147.66 0.00 0.00 245478.81 4445.05 328249.88
00:22:18.452 16:04:57 -- target/shutdown.sh@113 -- # sleep 1
00:22:19.398 16:04:58 -- target/shutdown.sh@114 -- # kill -0 2510948
00:22:19.398 16:04:58 -- target/shutdown.sh@116 -- # stoptarget
00:22:19.398 16:04:58 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:22:19.398 16:04:58 -- target/shutdown.sh@42 -- 
# rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:19.398 16:04:58 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:19.398 16:04:58 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:19.398 16:04:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:19.398 16:04:58 -- nvmf/common.sh@117 -- # sync 00:22:19.398 16:04:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:19.398 16:04:58 -- nvmf/common.sh@120 -- # set +e 00:22:19.398 16:04:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:19.398 16:04:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:19.398 rmmod nvme_tcp 00:22:19.398 rmmod nvme_fabrics 00:22:19.398 rmmod nvme_keyring 00:22:19.398 16:04:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:19.398 16:04:58 -- nvmf/common.sh@124 -- # set -e 00:22:19.398 16:04:58 -- nvmf/common.sh@125 -- # return 0 00:22:19.398 16:04:58 -- nvmf/common.sh@478 -- # '[' -n 2510948 ']' 00:22:19.398 16:04:58 -- nvmf/common.sh@479 -- # killprocess 2510948 00:22:19.398 16:04:58 -- common/autotest_common.sh@936 -- # '[' -z 2510948 ']' 00:22:19.398 16:04:58 -- common/autotest_common.sh@940 -- # kill -0 2510948 00:22:19.398 16:04:58 -- common/autotest_common.sh@941 -- # uname 00:22:19.398 16:04:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:19.398 16:04:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2510948 00:22:19.398 16:04:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:19.398 16:04:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:19.398 16:04:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2510948' 00:22:19.398 killing process with pid 2510948 00:22:19.398 16:04:58 -- common/autotest_common.sh@955 -- # kill 2510948 00:22:19.398 16:04:58 -- common/autotest_common.sh@960 -- # wait 2510948 00:22:22.694 16:05:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:22.694 16:05:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:22.694 16:05:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:22.694 16:05:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:22.694 16:05:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:22.694 16:05:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.694 16:05:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.694 16:05:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.601 16:05:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:24.601 00:22:24.601 real 0m13.483s 00:22:24.601 user 0m45.618s 00:22:24.601 sys 0m1.691s 00:22:24.601 16:05:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:24.601 16:05:04 -- common/autotest_common.sh@10 -- # set +x 00:22:24.601 ************************************ 00:22:24.601 END TEST nvmf_shutdown_tc2 00:22:24.601 ************************************ 00:22:24.601 16:05:04 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:24.601 16:05:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:24.601 16:05:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:24.601 16:05:04 -- common/autotest_common.sh@10 -- # set +x 00:22:24.860 ************************************ 00:22:24.860 START TEST nvmf_shutdown_tc3 00:22:24.860 ************************************ 00:22:24.860 16:05:04 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 
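The tc2 teardown traced just before the START banner runs in a fixed order: remove the generated bdevperf.conf and rpcs.txt, then nvmftestfini unloads the nvme-tcp/nvme-fabrics modules, kills the nvmf_tgt pid (2510948), removes the SPDK network namespace and flushes the leftover address. The killprocess step, reduced to what the xtrace shows (the real common/autotest_common.sh helper carries additional checks around sudo and stale pids):

```bash
killprocess() {
  # Kill a test daemon by pid: confirm it is still alive, log its command
  # name, send SIGTERM and reap it.
  local pid=$1
  [ -n "$pid" ] || return 1
  kill -0 "$pid" 2>/dev/null || return 0   # already gone
  local process_name
  process_name=$(ps --no-headers -o comm= "$pid")
  echo "killing process with pid $pid ($process_name)"
  kill "$pid"
  wait "$pid" 2>/dev/null || true
}
```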
00:22:24.860 16:05:04 -- target/shutdown.sh@121 -- # starttarget 00:22:24.860 16:05:04 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:24.860 16:05:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:24.860 16:05:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.860 16:05:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:24.860 16:05:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:24.860 16:05:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:24.860 16:05:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.860 16:05:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:24.860 16:05:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.861 16:05:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:24.861 16:05:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:24.861 16:05:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:24.861 16:05:04 -- common/autotest_common.sh@10 -- # set +x 00:22:24.861 16:05:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:24.861 16:05:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:24.861 16:05:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:24.861 16:05:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:24.861 16:05:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:24.861 16:05:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:24.861 16:05:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:24.861 16:05:04 -- nvmf/common.sh@295 -- # net_devs=() 00:22:24.861 16:05:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:24.861 16:05:04 -- nvmf/common.sh@296 -- # e810=() 00:22:24.861 16:05:04 -- nvmf/common.sh@296 -- # local -ga e810 00:22:24.861 16:05:04 -- nvmf/common.sh@297 -- # x722=() 00:22:24.861 16:05:04 -- nvmf/common.sh@297 -- # local -ga x722 00:22:24.861 16:05:04 -- nvmf/common.sh@298 -- # mlx=() 00:22:24.861 16:05:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:24.861 16:05:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:24.861 16:05:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:24.861 16:05:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:24.861 16:05:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:24.861 16:05:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:24.861 16:05:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:24.861 16:05:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:24.861 16:05:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:24.861 16:05:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:24.861 16:05:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:24.861 16:05:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:24.861 16:05:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:24.861 16:05:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:24.861 16:05:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:24.861 16:05:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:24.861 16:05:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:24.861 16:05:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:24.861 16:05:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:24.861 16:05:04 -- nvmf/common.sh@341 -- # echo 
'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:24.861 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:24.861 16:05:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:24.861 16:05:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:24.861 16:05:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.861 16:05:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.861 16:05:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:24.861 16:05:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:24.861 16:05:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:24.861 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:24.861 16:05:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:24.861 16:05:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:24.861 16:05:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.861 16:05:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.861 16:05:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:24.861 16:05:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:24.861 16:05:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:24.861 16:05:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:24.861 16:05:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:24.861 16:05:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.861 16:05:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:24.861 16:05:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.861 16:05:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:24.861 Found net devices under 0000:86:00.0: cvl_0_0 00:22:24.861 16:05:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.861 16:05:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:24.861 16:05:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.861 16:05:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:24.861 16:05:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.861 16:05:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:24.861 Found net devices under 0000:86:00.1: cvl_0_1 00:22:24.861 16:05:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.861 16:05:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:24.861 16:05:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:24.861 16:05:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:24.861 16:05:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:24.861 16:05:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:24.861 16:05:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:24.861 16:05:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:24.861 16:05:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:24.861 16:05:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:24.861 16:05:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:24.861 16:05:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:24.861 16:05:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:24.861 16:05:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:24.861 16:05:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:24.861 16:05:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:24.861 16:05:04 -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:22:24.861 16:05:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:24.861 16:05:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:25.121 16:05:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:25.121 16:05:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:25.121 16:05:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:25.121 16:05:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:25.121 16:05:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:25.121 16:05:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:25.121 16:05:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:25.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:25.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:22:25.121 00:22:25.121 --- 10.0.0.2 ping statistics --- 00:22:25.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.121 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:22:25.121 16:05:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:25.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:25.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:22:25.121 00:22:25.121 --- 10.0.0.1 ping statistics --- 00:22:25.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.121 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:22:25.121 16:05:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.121 16:05:04 -- nvmf/common.sh@411 -- # return 0 00:22:25.121 16:05:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:25.121 16:05:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.121 16:05:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:25.121 16:05:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:25.121 16:05:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.121 16:05:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:25.121 16:05:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:25.121 16:05:04 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:25.121 16:05:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:25.121 16:05:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:25.121 16:05:04 -- common/autotest_common.sh@10 -- # set +x 00:22:25.121 16:05:04 -- nvmf/common.sh@470 -- # nvmfpid=2513213 00:22:25.121 16:05:04 -- nvmf/common.sh@471 -- # waitforlisten 2513213 00:22:25.121 16:05:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:25.121 16:05:04 -- common/autotest_common.sh@817 -- # '[' -z 2513213 ']' 00:22:25.121 16:05:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.121 16:05:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:25.121 16:05:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
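The network plumbing for the TCP run can be read straight off the ip/iptables/ping lines above: the target-side port cvl_0_0 is moved into a private namespace, both ends get 10.0.0.x/24 addresses, TCP port 4420 is opened on the initiator interface, and connectivity is verified with a single ping in each direction. Condensed, using the interface names from this run (the address-flush steps are omitted):

```bash
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
```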
00:22:25.121 16:05:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:25.121 16:05:04 -- common/autotest_common.sh@10 -- # set +x 00:22:25.380 [2024-04-26 16:05:04.848296] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:25.380 [2024-04-26 16:05:04.848390] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.380 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.380 [2024-04-26 16:05:04.959181] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:25.639 [2024-04-26 16:05:05.183807] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.639 [2024-04-26 16:05:05.183863] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.639 [2024-04-26 16:05:05.183873] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.639 [2024-04-26 16:05:05.183884] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.639 [2024-04-26 16:05:05.183892] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:25.639 [2024-04-26 16:05:05.184017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.639 [2024-04-26 16:05:05.184102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:25.639 [2024-04-26 16:05:05.184183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.639 [2024-04-26 16:05:05.184204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:26.207 16:05:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:26.207 16:05:05 -- common/autotest_common.sh@850 -- # return 0 00:22:26.207 16:05:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:26.207 16:05:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:26.207 16:05:05 -- common/autotest_common.sh@10 -- # set +x 00:22:26.207 16:05:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.207 16:05:05 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:26.207 16:05:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.207 16:05:05 -- common/autotest_common.sh@10 -- # set +x 00:22:26.207 [2024-04-26 16:05:05.657319] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.207 16:05:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.207 16:05:05 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:26.207 16:05:05 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:26.207 16:05:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:26.207 16:05:05 -- common/autotest_common.sh@10 -- # set +x 00:22:26.207 16:05:05 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:26.207 16:05:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:26.207 16:05:05 -- target/shutdown.sh@28 -- # cat 00:22:26.207 16:05:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:26.207 16:05:05 -- target/shutdown.sh@28 -- # cat 00:22:26.207 16:05:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:26.207 16:05:05 -- target/shutdown.sh@28 -- # cat 
00:22:26.207 16:05:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:26.207 16:05:05 -- target/shutdown.sh@28 -- # cat 00:22:26.207 16:05:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:26.207 16:05:05 -- target/shutdown.sh@28 -- # cat 00:22:26.207 16:05:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:26.207 16:05:05 -- target/shutdown.sh@28 -- # cat 00:22:26.207 16:05:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:26.207 16:05:05 -- target/shutdown.sh@28 -- # cat 00:22:26.207 16:05:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:26.207 16:05:05 -- target/shutdown.sh@28 -- # cat 00:22:26.207 16:05:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:26.207 16:05:05 -- target/shutdown.sh@28 -- # cat 00:22:26.207 16:05:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:26.207 16:05:05 -- target/shutdown.sh@28 -- # cat 00:22:26.207 16:05:05 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:26.207 16:05:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.207 16:05:05 -- common/autotest_common.sh@10 -- # set +x 00:22:26.207 Malloc1 00:22:26.207 [2024-04-26 16:05:05.827239] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.466 Malloc2 00:22:26.466 Malloc3 00:22:26.725 Malloc4 00:22:26.725 Malloc5 00:22:26.725 Malloc6 00:22:26.984 Malloc7 00:22:26.984 Malloc8 00:22:27.244 Malloc9 00:22:27.244 Malloc10 00:22:27.244 16:05:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.244 16:05:06 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:27.244 16:05:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:27.244 16:05:06 -- common/autotest_common.sh@10 -- # set +x 00:22:27.244 16:05:06 -- target/shutdown.sh@125 -- # perfpid=2513712 00:22:27.244 16:05:06 -- target/shutdown.sh@126 -- # waitforlisten 2513712 /var/tmp/bdevperf.sock 00:22:27.244 16:05:06 -- common/autotest_common.sh@817 -- # '[' -z 2513712 ']' 00:22:27.244 16:05:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.244 16:05:06 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:27.244 16:05:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:27.244 16:05:06 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:27.244 16:05:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:27.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
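Subsystem setup above is batched: the `for i in "${num_subsystems[@]}"` loop appends one heredoc per subsystem to rpcs.txt, and a single rpc_cmd call then creates Malloc1..Malloc10 and the 10.0.0.2:4420 listener. The heredoc bodies themselves are not echoed by xtrace; a hypothetical reconstruction, assuming the standard SPDK RPC names and a `$testdir` variable for the target test directory (sizes, serial numbers and option sets are illustrative only):

```bash
# Hypothetical rpcs.txt batch -- not taken from this log.
for i in "${num_subsystems[@]}"; do
  cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < "$testdir/rpcs.txt"
```

The bdevperf invocation the trace is now waiting on (paraphrased from the shutdown.sh@124-126 lines) feeds the generated controller JSON in through process substitution and runs a 10-second verify workload with 64 KiB I/Os at queue depth 64 against a private RPC socket; `$rootdir` stands in for the workspace path shown in the log:

```bash
"$rootdir"/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
  --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
  -q 64 -o 65536 -w verify -t 10 &
perfpid=$!
waitforlisten $perfpid /var/tmp/bdevperf.sock
```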
00:22:27.244 16:05:06 -- nvmf/common.sh@521 -- # config=() 00:22:27.244 16:05:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:27.244 16:05:06 -- nvmf/common.sh@521 -- # local subsystem config 00:22:27.244 16:05:06 -- common/autotest_common.sh@10 -- # set +x 00:22:27.244 16:05:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:27.244 16:05:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:27.244 { 00:22:27.244 "params": { 00:22:27.244 "name": "Nvme$subsystem", 00:22:27.244 "trtype": "$TEST_TRANSPORT", 00:22:27.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.244 "adrfam": "ipv4", 00:22:27.244 "trsvcid": "$NVMF_PORT", 00:22:27.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.244 "hdgst": ${hdgst:-false}, 00:22:27.244 "ddgst": ${ddgst:-false} 00:22:27.244 }, 00:22:27.244 "method": "bdev_nvme_attach_controller" 00:22:27.244 } 00:22:27.244 EOF 00:22:27.244 )") 00:22:27.244 16:05:06 -- nvmf/common.sh@543 -- # cat 00:22:27.244 16:05:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:27.244 16:05:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:27.244 { 00:22:27.244 "params": { 00:22:27.244 "name": "Nvme$subsystem", 00:22:27.244 "trtype": "$TEST_TRANSPORT", 00:22:27.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.244 "adrfam": "ipv4", 00:22:27.244 "trsvcid": "$NVMF_PORT", 00:22:27.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.244 "hdgst": ${hdgst:-false}, 00:22:27.244 "ddgst": ${ddgst:-false} 00:22:27.244 }, 00:22:27.244 "method": "bdev_nvme_attach_controller" 00:22:27.244 } 00:22:27.244 EOF 00:22:27.244 )") 00:22:27.244 16:05:06 -- nvmf/common.sh@543 -- # cat 00:22:27.244 16:05:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:27.244 16:05:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:27.244 { 00:22:27.244 "params": { 00:22:27.244 "name": "Nvme$subsystem", 00:22:27.244 "trtype": "$TEST_TRANSPORT", 00:22:27.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.244 "adrfam": "ipv4", 00:22:27.244 "trsvcid": "$NVMF_PORT", 00:22:27.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.244 "hdgst": ${hdgst:-false}, 00:22:27.244 "ddgst": ${ddgst:-false} 00:22:27.244 }, 00:22:27.244 "method": "bdev_nvme_attach_controller" 00:22:27.244 } 00:22:27.244 EOF 00:22:27.244 )") 00:22:27.244 16:05:06 -- nvmf/common.sh@543 -- # cat 00:22:27.504 16:05:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:27.504 16:05:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:27.504 { 00:22:27.504 "params": { 00:22:27.504 "name": "Nvme$subsystem", 00:22:27.504 "trtype": "$TEST_TRANSPORT", 00:22:27.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.504 "adrfam": "ipv4", 00:22:27.504 "trsvcid": "$NVMF_PORT", 00:22:27.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.504 "hdgst": ${hdgst:-false}, 00:22:27.504 "ddgst": ${ddgst:-false} 00:22:27.504 }, 00:22:27.504 "method": "bdev_nvme_attach_controller" 00:22:27.504 } 00:22:27.504 EOF 00:22:27.504 )") 00:22:27.504 16:05:06 -- nvmf/common.sh@543 -- # cat 00:22:27.504 16:05:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:27.504 16:05:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:27.504 { 00:22:27.504 "params": { 00:22:27.504 "name": "Nvme$subsystem", 00:22:27.504 "trtype": 
"$TEST_TRANSPORT", 00:22:27.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.504 "adrfam": "ipv4", 00:22:27.504 "trsvcid": "$NVMF_PORT", 00:22:27.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.504 "hdgst": ${hdgst:-false}, 00:22:27.504 "ddgst": ${ddgst:-false} 00:22:27.504 }, 00:22:27.504 "method": "bdev_nvme_attach_controller" 00:22:27.504 } 00:22:27.504 EOF 00:22:27.504 )") 00:22:27.504 16:05:06 -- nvmf/common.sh@543 -- # cat 00:22:27.504 16:05:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:27.504 16:05:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:27.504 { 00:22:27.504 "params": { 00:22:27.504 "name": "Nvme$subsystem", 00:22:27.504 "trtype": "$TEST_TRANSPORT", 00:22:27.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.504 "adrfam": "ipv4", 00:22:27.504 "trsvcid": "$NVMF_PORT", 00:22:27.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.504 "hdgst": ${hdgst:-false}, 00:22:27.504 "ddgst": ${ddgst:-false} 00:22:27.504 }, 00:22:27.504 "method": "bdev_nvme_attach_controller" 00:22:27.504 } 00:22:27.504 EOF 00:22:27.504 )") 00:22:27.504 16:05:06 -- nvmf/common.sh@543 -- # cat 00:22:27.504 16:05:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:27.504 16:05:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:27.504 { 00:22:27.504 "params": { 00:22:27.504 "name": "Nvme$subsystem", 00:22:27.504 "trtype": "$TEST_TRANSPORT", 00:22:27.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.504 "adrfam": "ipv4", 00:22:27.504 "trsvcid": "$NVMF_PORT", 00:22:27.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.505 "hdgst": ${hdgst:-false}, 00:22:27.505 "ddgst": ${ddgst:-false} 00:22:27.505 }, 00:22:27.505 "method": "bdev_nvme_attach_controller" 00:22:27.505 } 00:22:27.505 EOF 00:22:27.505 )") 00:22:27.505 16:05:06 -- nvmf/common.sh@543 -- # cat 00:22:27.505 16:05:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:27.505 16:05:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:27.505 { 00:22:27.505 "params": { 00:22:27.505 "name": "Nvme$subsystem", 00:22:27.505 "trtype": "$TEST_TRANSPORT", 00:22:27.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.505 "adrfam": "ipv4", 00:22:27.505 "trsvcid": "$NVMF_PORT", 00:22:27.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.505 "hdgst": ${hdgst:-false}, 00:22:27.505 "ddgst": ${ddgst:-false} 00:22:27.505 }, 00:22:27.505 "method": "bdev_nvme_attach_controller" 00:22:27.505 } 00:22:27.505 EOF 00:22:27.505 )") 00:22:27.505 16:05:06 -- nvmf/common.sh@543 -- # cat 00:22:27.505 16:05:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:27.505 16:05:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:27.505 { 00:22:27.505 "params": { 00:22:27.505 "name": "Nvme$subsystem", 00:22:27.505 "trtype": "$TEST_TRANSPORT", 00:22:27.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.505 "adrfam": "ipv4", 00:22:27.505 "trsvcid": "$NVMF_PORT", 00:22:27.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.505 "hdgst": ${hdgst:-false}, 00:22:27.505 "ddgst": ${ddgst:-false} 00:22:27.505 }, 00:22:27.505 "method": "bdev_nvme_attach_controller" 00:22:27.505 } 00:22:27.505 EOF 00:22:27.505 )") 00:22:27.505 16:05:06 -- nvmf/common.sh@543 -- # cat 00:22:27.505 
16:05:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:27.505 16:05:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:27.505 { 00:22:27.505 "params": { 00:22:27.505 "name": "Nvme$subsystem", 00:22:27.505 "trtype": "$TEST_TRANSPORT", 00:22:27.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.505 "adrfam": "ipv4", 00:22:27.505 "trsvcid": "$NVMF_PORT", 00:22:27.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.505 "hdgst": ${hdgst:-false}, 00:22:27.505 "ddgst": ${ddgst:-false} 00:22:27.505 }, 00:22:27.505 "method": "bdev_nvme_attach_controller" 00:22:27.505 } 00:22:27.505 EOF 00:22:27.505 )") 00:22:27.505 16:05:06 -- nvmf/common.sh@543 -- # cat 00:22:27.505 16:05:06 -- nvmf/common.sh@545 -- # jq . 00:22:27.505 16:05:06 -- nvmf/common.sh@546 -- # IFS=, 00:22:27.505 [2024-04-26 16:05:06.980055] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:27.505 [2024-04-26 16:05:06.980155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2513712 ] 00:22:27.505 16:05:06 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:27.505 "params": { 00:22:27.505 "name": "Nvme1", 00:22:27.505 "trtype": "tcp", 00:22:27.505 "traddr": "10.0.0.2", 00:22:27.505 "adrfam": "ipv4", 00:22:27.505 "trsvcid": "4420", 00:22:27.505 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.505 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:27.505 "hdgst": false, 00:22:27.505 "ddgst": false 00:22:27.505 }, 00:22:27.505 "method": "bdev_nvme_attach_controller" 00:22:27.505 },{ 00:22:27.505 "params": { 00:22:27.505 "name": "Nvme2", 00:22:27.505 "trtype": "tcp", 00:22:27.505 "traddr": "10.0.0.2", 00:22:27.505 "adrfam": "ipv4", 00:22:27.505 "trsvcid": "4420", 00:22:27.505 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:27.505 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:27.505 "hdgst": false, 00:22:27.505 "ddgst": false 00:22:27.505 }, 00:22:27.505 "method": "bdev_nvme_attach_controller" 00:22:27.505 },{ 00:22:27.505 "params": { 00:22:27.505 "name": "Nvme3", 00:22:27.505 "trtype": "tcp", 00:22:27.505 "traddr": "10.0.0.2", 00:22:27.505 "adrfam": "ipv4", 00:22:27.505 "trsvcid": "4420", 00:22:27.505 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:27.505 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:27.505 "hdgst": false, 00:22:27.505 "ddgst": false 00:22:27.505 }, 00:22:27.505 "method": "bdev_nvme_attach_controller" 00:22:27.505 },{ 00:22:27.505 "params": { 00:22:27.505 "name": "Nvme4", 00:22:27.505 "trtype": "tcp", 00:22:27.505 "traddr": "10.0.0.2", 00:22:27.505 "adrfam": "ipv4", 00:22:27.505 "trsvcid": "4420", 00:22:27.505 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:27.505 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:27.505 "hdgst": false, 00:22:27.505 "ddgst": false 00:22:27.505 }, 00:22:27.505 "method": "bdev_nvme_attach_controller" 00:22:27.505 },{ 00:22:27.505 "params": { 00:22:27.505 "name": "Nvme5", 00:22:27.505 "trtype": "tcp", 00:22:27.505 "traddr": "10.0.0.2", 00:22:27.505 "adrfam": "ipv4", 00:22:27.505 "trsvcid": "4420", 00:22:27.505 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:27.505 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:27.505 "hdgst": false, 00:22:27.505 "ddgst": false 00:22:27.505 }, 00:22:27.505 "method": "bdev_nvme_attach_controller" 00:22:27.505 },{ 00:22:27.505 "params": { 00:22:27.505 "name": "Nvme6", 00:22:27.505 "trtype": 
"tcp", 00:22:27.505 "traddr": "10.0.0.2", 00:22:27.505 "adrfam": "ipv4", 00:22:27.505 "trsvcid": "4420", 00:22:27.505 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:27.505 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:27.505 "hdgst": false, 00:22:27.505 "ddgst": false 00:22:27.505 }, 00:22:27.505 "method": "bdev_nvme_attach_controller" 00:22:27.505 },{ 00:22:27.505 "params": { 00:22:27.505 "name": "Nvme7", 00:22:27.505 "trtype": "tcp", 00:22:27.505 "traddr": "10.0.0.2", 00:22:27.505 "adrfam": "ipv4", 00:22:27.505 "trsvcid": "4420", 00:22:27.505 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:27.505 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:27.505 "hdgst": false, 00:22:27.505 "ddgst": false 00:22:27.505 }, 00:22:27.505 "method": "bdev_nvme_attach_controller" 00:22:27.505 },{ 00:22:27.505 "params": { 00:22:27.505 "name": "Nvme8", 00:22:27.505 "trtype": "tcp", 00:22:27.505 "traddr": "10.0.0.2", 00:22:27.505 "adrfam": "ipv4", 00:22:27.505 "trsvcid": "4420", 00:22:27.505 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:27.505 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:27.505 "hdgst": false, 00:22:27.505 "ddgst": false 00:22:27.505 }, 00:22:27.505 "method": "bdev_nvme_attach_controller" 00:22:27.505 },{ 00:22:27.505 "params": { 00:22:27.505 "name": "Nvme9", 00:22:27.505 "trtype": "tcp", 00:22:27.505 "traddr": "10.0.0.2", 00:22:27.505 "adrfam": "ipv4", 00:22:27.505 "trsvcid": "4420", 00:22:27.505 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:27.505 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:27.505 "hdgst": false, 00:22:27.505 "ddgst": false 00:22:27.505 }, 00:22:27.505 "method": "bdev_nvme_attach_controller" 00:22:27.505 },{ 00:22:27.505 "params": { 00:22:27.505 "name": "Nvme10", 00:22:27.505 "trtype": "tcp", 00:22:27.505 "traddr": "10.0.0.2", 00:22:27.505 "adrfam": "ipv4", 00:22:27.505 "trsvcid": "4420", 00:22:27.505 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:27.505 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:27.505 "hdgst": false, 00:22:27.505 "ddgst": false 00:22:27.505 }, 00:22:27.505 "method": "bdev_nvme_attach_controller" 00:22:27.505 }' 00:22:27.505 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.505 [2024-04-26 16:05:07.084732] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.764 [2024-04-26 16:05:07.316736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.300 Running I/O for 10 seconds... 
00:22:30.300 16:05:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:30.300 16:05:09 -- common/autotest_common.sh@850 -- # return 0 00:22:30.300 16:05:09 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:30.300 16:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.300 16:05:09 -- common/autotest_common.sh@10 -- # set +x 00:22:30.300 16:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.300 16:05:09 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:30.300 16:05:09 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:30.300 16:05:09 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:30.300 16:05:09 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:30.300 16:05:09 -- target/shutdown.sh@57 -- # local ret=1 00:22:30.300 16:05:09 -- target/shutdown.sh@58 -- # local i 00:22:30.300 16:05:09 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:30.300 16:05:09 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:30.300 16:05:09 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:30.300 16:05:09 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:30.300 16:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.300 16:05:09 -- common/autotest_common.sh@10 -- # set +x 00:22:30.300 16:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.300 16:05:09 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:30.300 16:05:09 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:30.300 16:05:09 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:30.300 16:05:09 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:30.300 16:05:09 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:30.300 16:05:09 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:30.300 16:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.300 16:05:09 -- common/autotest_common.sh@10 -- # set +x 00:22:30.300 16:05:09 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:30.300 16:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.558 16:05:10 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:30.558 16:05:10 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:30.558 16:05:10 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:30.831 16:05:10 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:30.831 16:05:10 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:30.831 16:05:10 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:30.831 16:05:10 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:30.831 16:05:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.831 16:05:10 -- common/autotest_common.sh@10 -- # set +x 00:22:30.831 16:05:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.831 16:05:10 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:30.831 16:05:10 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:30.831 16:05:10 -- target/shutdown.sh@64 -- # ret=0 00:22:30.831 16:05:10 -- target/shutdown.sh@65 -- # break 00:22:30.831 16:05:10 -- target/shutdown.sh@69 -- # return 0 00:22:30.831 16:05:10 -- target/shutdown.sh@135 -- # killprocess 2513213 00:22:30.831 16:05:10 -- common/autotest_common.sh@936 -- # '[' -z 2513213 ']' 00:22:30.831 16:05:10 -- common/autotest_common.sh@940 -- # kill 
-0 2513213 00:22:30.831 16:05:10 -- common/autotest_common.sh@941 -- # uname 00:22:30.831 16:05:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:30.831 16:05:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2513213 00:22:30.831 16:05:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:30.831 16:05:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:30.831 16:05:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2513213' 00:22:30.831 killing process with pid 2513213 00:22:30.831 16:05:10 -- common/autotest_common.sh@955 -- # kill 2513213 00:22:30.831 16:05:10 -- common/autotest_common.sh@960 -- # wait 2513213 00:22:30.831 [2024-04-26 16:05:10.379136] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(5) to be set 00:22:30.831 [2024-04-26 16:05:10.379190] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(5) to be set 00:22:30.831 [2024-04-26 16:05:10.381883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-04-26 16:05:10.381930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-04-26 16:05:10.381955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-04-26 16:05:10.381973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-04-26 16:05:10.381986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-04-26 16:05:10.381996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-04-26 16:05:10.382008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-04-26 16:05:10.382018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-04-26 16:05:10.382029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-04-26 16:05:10.382039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-04-26 16:05:10.382050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-04-26 16:05:10.382068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-04-26 16:05:10.382086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-04-26 16:05:10.382096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-04-26 
16:05:10.382107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-04-26 16:05:10.382117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-04-26 16:05:10.382129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-04-26 16:05:10.382138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-04-26 16:05:10.382149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 
16:05:10.382320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 
16:05:10.382526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 
16:05:10.382726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-04-26 16:05:10.382892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-04-26 16:05:10.382903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-04-26 16:05:10.382913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-04-26 16:05:10.382926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-04-26 16:05:10.382937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-04-26 16:05:10.382949] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833
[2024-04-26 16:05:10.382959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833
[2024-04-26 16:05:10.382970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833
[2024-04-26 16:05:10.382979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833
[2024-04-26 16:05:10.382990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833
[2024-04-26 16:05:10.383000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833
[2024-04-26 16:05:10.383011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833
[2024-04-26 16:05:10.383020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833
[2024-04-26 16:05:10.383031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833
[2024-04-26 16:05:10.383042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833
[2024-04-26 16:05:10.383053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833
[2024-04-26 16:05:10.383062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833
[2024-04-26 16:05:10.383078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833
[2024-04-26 16:05:10.383066] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833
[2024-04-26 16:05:10.383089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833
[2024-04-26 16:05:10.383104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833
[2024-04-26 16:05:10.383105] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833
[2024-04-26 16:05:10.383114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833
[2024-04-26 16:05:10.383117] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833
[2024-04-26 16:05:10.383127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833
[2024-04-26 16:05:10.383127] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833
[2024-04-26 16:05:10.383140] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833
[2024-04-26 16:05:10.383141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833
[2024-04-26 16:05:10.383152] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833
[2024-04-26 16:05:10.383156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833
[2024-04-26 16:05:10.383167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833
[2024-04-26 16:05:10.383163] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833
[2024-04-26 16:05:10.383179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833
[2024-04-26 16:05:10.383180] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833
[2024-04-26 16:05:10.383189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833
[2024-04-26 16:05:10.383190] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833
[2024-04-26 16:05:10.383200] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833
[2024-04-26 16:05:10.383201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833
[2024-04-26 16:05:10.383209] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833
[2024-04-26 16:05:10.383212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833
[2024-04-26 16:05:10.383223] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833
[2024-04-26 16:05:10.383227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833
[2024-04-26 16:05:10.383234] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833
[2024-04-26 16:05:10.383238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833
[2024-04-26 16:05:10.383243] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833
[2024-04-26 16:05:10.383251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833
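The run of "ABORTED - SQ DELETION (00/08)" completions above and below is bdevperf's NVMe host driver reporting the I/O that was still in flight when its TCP qpair to the target went down: nvme_io_qpair_print_command prints each queued command and spdk_nvme_print_completion prints the abort status it was completed with. An optional way to gauge how much I/O was outstanding is to count those completions in a saved copy of this console output; a minimal sketch, with $LOG as a placeholder for that file:
  # $LOG is a placeholder path to a saved copy of this console log
  grep -c 'ABORTED - SQ DELETION' "$LOG"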
[2024-04-26 16:05:10.383252] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833 [2024-04-26 16:05:10.383261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-04-26 16:05:10.383262] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833 [2024-04-26 16:05:10.383271] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833 [2024-04-26 16:05:10.383274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-04-26 16:05:10.383280] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833 [2024-04-26 16:05:10.383284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-04-26 16:05:10.383289] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833 [2024-04-26 16:05:10.383297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-04-26 16:05:10.383299] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833 [2024-04-26 16:05:10.383307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-04-26 16:05:10.383308] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833 [2024-04-26 16:05:10.383318] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833 [2024-04-26 16:05:10.383320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-04-26 16:05:10.383327] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833 [2024-04-26 16:05:10.383330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-04-26 16:05:10.383337] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833 [2024-04-26 16:05:10.383346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833 [2024-04-26 16:05:10.383354] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833 [2024-04-26 16:05:10.383364] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833 [2024-04-26 16:05:10.383373] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same 
with the state(5) to be set 00:22:30.833 [2024-04-26 16:05:10.383381] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833 [2024-04-26 16:05:10.383390] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833 [2024-04-26 16:05:10.383398] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833 [2024-04-26 16:05:10.383407] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833 [2024-04-26 16:05:10.383415] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833 [2024-04-26 16:05:10.383426] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833 [2024-04-26 16:05:10.383434] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833 [2024-04-26 16:05:10.383443] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.833 [2024-04-26 16:05:10.383452] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383461] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383469] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383477] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383486] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383494] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383502] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383511] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383519] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383527] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383535] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383543] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383552] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with 
the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383560] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383569] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383578] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383588] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383596] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383605] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383613] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383622] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383630] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383635] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000019e40 was disconnected and freed. reset controller. 00:22:30.834 [2024-04-26 16:05:10.383638] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383648] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383657] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383665] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.383674] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.385946] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.385971] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.385984] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.385993] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386001] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386010] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the 
state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386018] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386027] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386036] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386044] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386053] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386061] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386076] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386085] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386097] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386108] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386122] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386130] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386138] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386147] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386157] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386165] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386174] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386182] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386190] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386199] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386207] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the 
state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386216] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386224] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386232] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386240] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386249] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386257] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386265] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386274] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386283] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386291] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386299] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386308] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386317] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386325] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386335] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386343] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386352] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386361] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386370] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386379] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.834 [2024-04-26 16:05:10.386387] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the 
state(5) to be set 00:22:30.835 [2024-04-26 16:05:10.386396] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.835 [2024-04-26 16:05:10.386404] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.835 [2024-04-26 16:05:10.386412] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.835 [2024-04-26 16:05:10.386421] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.835 [2024-04-26 16:05:10.386429] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.835 [2024-04-26 16:05:10.386438] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.835 [2024-04-26 16:05:10.386447] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.835 [2024-04-26 16:05:10.386455] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.835 [2024-04-26 16:05:10.386463] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.835 [2024-04-26 16:05:10.386471] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.835 [2024-04-26 16:05:10.386479] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.835 [2024-04-26 16:05:10.386487] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.835 [2024-04-26 16:05:10.386496] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.835 [2024-04-26 16:05:10.386503] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.835 [2024-04-26 16:05:10.386512] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(5) to be set 00:22:30.835 [2024-04-26 16:05:10.387200] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:30.835 [2024-04-26 16:05:10.387284] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:22:30.835 [2024-04-26 16:05:10.388920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.835 [2024-04-26 16:05:10.389181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.835 [2024-04-26 16:05:10.389200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:22:30.835 [2024-04-26 16:05:10.389217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:22:30.835 [2024-04-26 16:05:10.389316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.835 [2024-04-26 16:05:10.389331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-04-26 16:05:10.389344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.835 [2024-04-26 16:05:10.389355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-04-26 16:05:10.389372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.835 [2024-04-26 16:05:10.389381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-04-26 16:05:10.389392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.835 [2024-04-26 16:05:10.389402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-04-26 16:05:10.389411] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000016840 is same with the state(5) to be set 00:22:30.835 [2024-04-26 16:05:10.389943] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:30.835 [2024-04-26 16:05:10.389981] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:22:30.835 [2024-04-26 16:05:10.390456] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:30.835 [2024-04-26 16:05:10.390481] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:30.835 [2024-04-26 16:05:10.390498] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:30.835 [2024-04-26 16:05:10.390559] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:30.835 [2024-04-26 16:05:10.390857] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
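The failed reset just above follows directly from the connect errors: errno 111 from connect() is ECONNREFUSED on Linux, i.e. nothing is listening on 10.0.0.2:4420 any more, presumably because the target application was killed earlier in this test, so nvme_tcp_qpair_connect_sock cannot rebuild the qpair and the controller is left in a failed state. A minimal, hypothetical shell check (not part of the test scripts) that demonstrates the same condition:
  # Exits non-zero and prints the message once nothing accepts connections on
  # 10.0.0.2:4420, matching the 'connect() failed, errno = 111' lines above.
  timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null \
    || echo '10.0.0.2:4420 refused the connection'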
00:22:30.835 [2024-04-26 16:05:10.398111] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:30.835 [2024-04-26 16:05:10.398479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.835 [2024-04-26 16:05:10.398783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.835 [2024-04-26 16:05:10.398799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:22:30.835 [2024-04-26 16:05:10.398811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:22:30.835 [2024-04-26 16:05:10.399818] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:22:30.835 [2024-04-26 16:05:10.399886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.835 [2024-04-26 16:05:10.399903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-04-26 16:05:10.399916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.835 [2024-04-26 16:05:10.399927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-04-26 16:05:10.399938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.835 [2024-04-26 16:05:10.399952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-04-26 16:05:10.399963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.835 [2024-04-26 16:05:10.399972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-04-26 16:05:10.399982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000009640 is same with the state(5) to be set 00:22:30.835 [2024-04-26 16:05:10.400025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.835 [2024-04-26 16:05:10.400038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-04-26 16:05:10.400049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.835 [2024-04-26 16:05:10.400058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-04-26 16:05:10.400085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.835 [2024-04-26 16:05:10.400096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-04-26 16:05:10.400107] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.835 [2024-04-26 16:05:10.400116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-04-26 16:05:10.400125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007840 is same with the state(5) to be set 00:22:30.835 [2024-04-26 16:05:10.400161] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000016840 (9): Bad file descriptor 00:22:30.835 [2024-04-26 16:05:10.400242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-04-26 16:05:10.400257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-04-26 16:05:10.400278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-04-26 16:05:10.400289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-04-26 16:05:10.400304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-04-26 16:05:10.400314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-04-26 16:05:10.400327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-04-26 16:05:10.400337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-04-26 16:05:10.400350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-04-26 16:05:10.400359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-04-26 16:05:10.400372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-04-26 16:05:10.400385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-04-26 16:05:10.400397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-04-26 16:05:10.400407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.400982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.400991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.401002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.401012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.401023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.401033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-04-26 16:05:10.401044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-04-26 16:05:10.401054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:30.837 [2024-04-26 16:05:10.401331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 
16:05:10.401549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-04-26 16:05:10.401658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-04-26 16:05:10.401669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400001aa40 is same with the state(5) to be set 00:22:30.837 [2024-04-26 16:05:10.401953] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61400001aa40 was disconnected and freed. reset controller. 00:22:30.837 [2024-04-26 16:05:10.403608] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:30.837 [2024-04-26 16:05:10.403632] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:30.837 [2024-04-26 16:05:10.403644] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:30.837 [2024-04-26 16:05:10.405399] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:30.837 [2024-04-26 16:05:10.405426] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:30.837 [2024-04-26 16:05:10.405446] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009640 (9): Bad file descriptor 00:22:30.837 [2024-04-26 16:05:10.405597] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009c80 is same with the state(5) to be set [... same tcp.c:1587 recv-state message for tqpair=0x618000009c80 repeated through 16:05:10.406179 ...] 00:22:30.838 [2024-04-26 16:05:10.407615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.838 [2024-04-26 16:05:10.407914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.838 [2024-04-26 16:05:10.407930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000009640 with addr=10.0.0.2, port=4420 00:22:30.838 [2024-04-26 16:05:10.407944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000009640 is same with the state(5) to be set 00:22:30.838 [2024-04-26 16:05:10.408564] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009640 (9): Bad file descriptor 00:22:30.838 [2024-04-26 16:05:10.408907] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:30.838 [2024-04-26 16:05:10.408929] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:30.838 [2024-04-26 16:05:10.408941] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:30.838 [2024-04-26 16:05:10.409205] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:30.838 [2024-04-26 16:05:10.409231] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:30.838 [2024-04-26 16:05:10.409367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-04-26 16:05:10.409385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-04-26 16:05:10.409411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-04-26 16:05:10.409427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-04-26 16:05:10.409441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-04-26 16:05:10.409452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-04-26 16:05:10.409465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-04-26 16:05:10.409479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-04-26 16:05:10.409492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-04-26 16:05:10.409503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-04-26 16:05:10.409515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-04-26 16:05:10.409526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-04-26 16:05:10.409539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-04-26 16:05:10.409549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-04-26 16:05:10.409562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.409572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.409584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.409594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.409605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.409617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 
[2024-04-26 16:05:10.409629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.409640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.409651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.409662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.409673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.409684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.409695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.409706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.409718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.409728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.409740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.409749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.409763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.409774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.409786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.409798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.409810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.409819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.409831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.409841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 
16:05:10.409854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.409864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.409876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.409886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.409898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.409908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.409921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.409931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.409943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.409953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.409966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.409976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.409971] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:22:30.839 [2024-04-26 16:05:10.409988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.409999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.410000] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:22:30.839 [2024-04-26 16:05:10.410011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.410024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.410037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.410047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.410059] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.410076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.410088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.410099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.410111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.410122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.410143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.410153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.410165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.410174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.410187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.410196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.410207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.410218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.410228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.410265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.410277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.410287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.410299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.410310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.410322] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.410332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.410343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.410355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.410366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-04-26 16:05:10.410376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-04-26 16:05:10.410388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.410398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.410409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.410419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.410431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.410441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.410452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.410462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.410474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.410483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.410494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.410504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.410517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.410527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.410538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.410549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.410561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.410571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.410583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.410593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.410605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.410615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.410629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.410639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.410650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.410659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.410671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.410680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.410691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.410701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.410712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.410722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.410734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.410744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.410755] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.410765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.410777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.410786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.410798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.410807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.410819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.410828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.410839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.410849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.411163] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61400001b640 was disconnected and freed. reset controller. 
00:22:30.840 [2024-04-26 16:05:10.411195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.411207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.411225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.411235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.411248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.411259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.411270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.411281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.411293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.411305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.411317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.411328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.411340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.411350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.411361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.411372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.411383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.411393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.411405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.411414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 
16:05:10.411425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.411435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.411447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.411457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.411468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.411478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.411489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-04-26 16:05:10.411500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-04-26 16:05:10.411512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.411523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.411535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.411546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.411559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.411569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.411581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.411591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.411603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.411614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.411626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.411637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.411652] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.411662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.411675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.411685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.411697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.411707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.411719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.411729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.411741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.411750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.411761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.411771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.411785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.411795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.411807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.411817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.411829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.411840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.411852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.411863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.411875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.411885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.411896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.411907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.411918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.411929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.411940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.411950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.411962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.411972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.411983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.411993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.412011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.412021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.412033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.412043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.412054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.412066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.412089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.412100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.412113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.412123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.412136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.412146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.412158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.412168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.412181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.412191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.412202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.412212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-04-26 16:05:10.412225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-04-26 16:05:10.412236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.842 [2024-04-26 16:05:10.412247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.842 [2024-04-26 16:05:10.412258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.842 [2024-04-26 16:05:10.412275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.842 [2024-04-26 16:05:10.412285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.842 [2024-04-26 16:05:10.412297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.842 [2024-04-26 16:05:10.412308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.842 [2024-04-26 16:05:10.412320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.842 [2024-04-26 16:05:10.412330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.842 [2024-04-26 16:05:10.412343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.842 [2024-04-26 16:05:10.412353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.842 [2024-04-26 16:05:10.412366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.842 [2024-04-26 16:05:10.412377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.842 [2024-04-26 16:05:10.412389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.842 [2024-04-26 16:05:10.412399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.842 [2024-04-26 16:05:10.412410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.842 [2024-04-26 16:05:10.412403] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.842 [2024-04-26 16:05:10.412432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.842 [2024-04-26 16:05:10.412431] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.842 [2024-04-26 16:05:10.412445] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412458] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.842 [2024-04-26 16:05:10.412468] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.842 [2024-04-26 16:05:10.412478] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.842 [2024-04-26 16:05:10.412488] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.842 [2024-04-26 16:05:10.412498] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842
[2024-04-26 16:05:10.412509] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.842 [2024-04-26 16:05:10.412519] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.842 [2024-04-26 16:05:10.412529] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.842 [2024-04-26 16:05:10.412545] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.842 [2024-04-26 16:05:10.412555] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.842 [2024-04-26 16:05:10.412566] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.842 [2024-04-26 16:05:10.412575] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.842 [2024-04-26 16:05:10.412585] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.842 [2024-04-26 16:05:10.412601] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412611] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.842 [2024-04-26 16:05:10.412620]
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.842 [2024-04-26 16:05:10.412630] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.842 [2024-04-26 16:05:10.412642] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.842 [2024-04-26 16:05:10.412651] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412662] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.842 [2024-04-26 16:05:10.412672] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.842 [2024-04-26 16:05:10.412682] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412693] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412704] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412714] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412722] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412731] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412740] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412749] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412758] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412777] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 
[2024-04-26 16:05:10.412787] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412796] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.842 [2024-04-26 16:05:10.412804] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.412815] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.412824] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.412833] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.412843] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.412852] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.412861] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.412869] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.412878] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.412886] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.412897] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.412907] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.412916] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.412924] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.412933] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.412943] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.412951] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.412960] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.412970] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 
[2024-04-26 16:05:10.412971] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61400001bc40 was disconnected and freed. reset controller. 00:22:30.843 [2024-04-26 16:05:10.412979] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.412988] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.412996] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.413005] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.413013] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.413021] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.413030] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.413038] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.413475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.843 [2024-04-26 16:05:10.413769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.843 [2024-04-26 16:05:10.413784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:22:30.843 [2024-04-26 16:05:10.413795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.413839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.843 [2024-04-26 16:05:10.413853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.843 [2024-04-26 16:05:10.413865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.843 [2024-04-26 16:05:10.413875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.843 [2024-04-26 16:05:10.413887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.843 [2024-04-26 16:05:10.413897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.843 [2024-04-26 16:05:10.413907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.843 [2024-04-26 16:05:10.413917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.843 [2024-04-26 16:05:10.413926] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010e40 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.413962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.843 [2024-04-26 16:05:10.413978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.843 [2024-04-26 16:05:10.413989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.843 [2024-04-26 16:05:10.413999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.843 [2024-04-26 16:05:10.414009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.843 [2024-04-26 16:05:10.414018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.843 [2024-04-26 16:05:10.414028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.843 [2024-04-26 16:05:10.414038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.843 [2024-04-26 16:05:10.414047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400000d240 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.414062] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007840 (9): Bad file descriptor 00:22:30.843 [2024-04-26 16:05:10.414103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.843 [2024-04-26 16:05:10.414116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.843 [2024-04-26 16:05:10.414127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.843 [2024-04-26 16:05:10.414136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.843 [2024-04-26 16:05:10.414146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.843 [2024-04-26 16:05:10.414157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.843 [2024-04-26 16:05:10.414169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.843 [2024-04-26 16:05:10.414179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.843 [2024-04-26 16:05:10.414188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400000b440 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.414218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.843 [2024-04-26 16:05:10.414230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.843 [2024-04-26 16:05:10.414241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.843 [2024-04-26 16:05:10.414251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.843 [2024-04-26 16:05:10.414261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.843 [2024-04-26 16:05:10.414272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.843 [2024-04-26 16:05:10.414283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.843 [2024-04-26 16:05:10.414295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.843 [2024-04-26 16:05:10.414305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400000f040 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.414400] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:30.843 [2024-04-26 16:05:10.415062] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.415094] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.415104] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.415113] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.415122] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.843 [2024-04-26 16:05:10.415130] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415139] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415147] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415156] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415166] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415174] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415183] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the 
state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415192] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415200] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415210] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415219] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415227] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415235] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415244] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415252] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415261] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415269] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415277] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415286] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415298] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415307] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415316] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415325] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415335] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415343] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415351] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415360] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415368] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the 
state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415378] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415387] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415396] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415404] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415413] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415422] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415430] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415439] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415447] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415456] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415464] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415473] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415481] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415488] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415497] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415505] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415513] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415526] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415536] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415544] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415553] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the 
state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415562] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415571] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415579] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415588] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415596] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415604] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415613] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415621] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.415630] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:22:30.844 [2024-04-26 16:05:10.416506] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:30.844 [2024-04-26 16:05:10.416532] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:30.844 [2024-04-26 16:05:10.416549] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61400000f040 (9): Bad file descriptor 00:22:30.844 [2024-04-26 16:05:10.416564] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61400000d240 (9): Bad file descriptor 00:22:30.844 [2024-04-26 16:05:10.416579] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:22:30.844 [2024-04-26 16:05:10.416788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.844 [2024-04-26 16:05:10.416809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.844 [2024-04-26 16:05:10.416827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.844 [2024-04-26 16:05:10.416838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.844 [2024-04-26 16:05:10.416851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.844 [2024-04-26 16:05:10.416861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.844 [2024-04-26 16:05:10.416873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:30.844 [2024-04-26 16:05:10.416883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.844 [2024-04-26 16:05:10.416896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.844 [2024-04-26 16:05:10.416910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.844 [2024-04-26 16:05:10.416923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.844 [2024-04-26 16:05:10.416934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.844 [2024-04-26 16:05:10.416946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.844 [2024-04-26 16:05:10.416956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.416968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.416978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.416990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 
[2024-04-26 16:05:10.417143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 
16:05:10.417364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417583] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.845 [2024-04-26 16:05:10.417774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.845 [2024-04-26 16:05:10.417785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.846 [2024-04-26 16:05:10.417794] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.417807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.846 [2024-04-26 16:05:10.417817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.417829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.846 [2024-04-26 16:05:10.417838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.417850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.846 [2024-04-26 16:05:10.417859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.417870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.846 [2024-04-26 16:05:10.417880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.417891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.846 [2024-04-26 16:05:10.417900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.417919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.846 [2024-04-26 16:05:10.417930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.417941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.846 [2024-04-26 16:05:10.417951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.417963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.846 [2024-04-26 16:05:10.417972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.417983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.846 [2024-04-26 16:05:10.417993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.418004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.846 [2024-04-26 16:05:10.418013] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.418024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.846 [2024-04-26 16:05:10.418036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.418050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.846 [2024-04-26 16:05:10.418060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.418076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.846 [2024-04-26 16:05:10.418087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.418099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.846 [2024-04-26 16:05:10.418110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.418121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.846 [2024-04-26 16:05:10.418131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.418143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.846 [2024-04-26 16:05:10.418152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.418164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.846 [2024-04-26 16:05:10.418174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.418187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.846 [2024-04-26 16:05:10.418197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.418213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.846 [2024-04-26 16:05:10.418223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.418235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.846 [2024-04-26 16:05:10.418247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.418258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400001d440 is same with the state(5) to be set 00:22:30.846 [2024-04-26 16:05:10.423615] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:30.846 [2024-04-26 16:05:10.423673] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:30.846 [2024-04-26 16:05:10.423686] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:30.846 [2024-04-26 16:05:10.423696] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:30.846 [2024-04-26 16:05:10.423729] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:30.846 [2024-04-26 16:05:10.423769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.846 [2024-04-26 16:05:10.423785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.423798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.846 [2024-04-26 16:05:10.423809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.423819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.846 [2024-04-26 16:05:10.423830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.423841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.846 [2024-04-26 16:05:10.423852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.423863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000012c40 is same with the state(5) to be set 00:22:30.846 [2024-04-26 16:05:10.423885] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000010e40 (9): Bad file descriptor 00:22:30.846 [2024-04-26 16:05:10.423914] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61400000b440 (9): Bad file descriptor 00:22:30.846 [2024-04-26 16:05:10.423951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.846 [2024-04-26 16:05:10.423964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.423975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.846 [2024-04-26 16:05:10.423986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.846 [2024-04-26 16:05:10.423996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.846 [2024-04-26 16:05:10.424007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.424018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.847 [2024-04-26 16:05:10.424035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.424045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000014a40 is same with the state(5) to be set 00:22:30.847 [2024-04-26 16:05:10.425058] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:30.847 [2024-04-26 16:05:10.425094] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:30.847 [2024-04-26 16:05:10.425476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.847 [2024-04-26 16:05:10.425823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.847 [2024-04-26 16:05:10.425838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61400000d240 with addr=10.0.0.2, port=4420 00:22:30.847 [2024-04-26 16:05:10.425851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400000d240 is same with the state(5) to be set 00:22:30.847 [2024-04-26 16:05:10.426185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.847 [2024-04-26 16:05:10.426456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.847 [2024-04-26 16:05:10.426474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61400000f040 with addr=10.0.0.2, port=4420 00:22:30.847 [2024-04-26 16:05:10.426485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400000f040 is same with the state(5) to be set 00:22:30.847 [2024-04-26 16:05:10.426868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.847 [2024-04-26 16:05:10.427154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.847 [2024-04-26 16:05:10.427168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000016840 with addr=10.0.0.2, port=4420 00:22:30.847 [2024-04-26 16:05:10.427178] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000016840 is same with the state(5) to be set 00:22:30.847 [2024-04-26 16:05:10.427226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427272] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.847 [2024-04-26 16:05:10.427879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.847 [2024-04-26 16:05:10.427889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.427901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.427911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.427923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.427932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.427945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.427955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:30.848 [2024-04-26 16:05:10.427968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.427980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.427992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 
[2024-04-26 16:05:10.428201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 
16:05:10.428418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428647] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.848 [2024-04-26 16:05:10.428679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.848 [2024-04-26 16:05:10.428689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400001a440 is same with the state(5) to be set 00:22:30.848 [2024-04-26 16:05:10.430565] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:30.848 [2024-04-26 16:05:10.430978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.848 [2024-04-26 16:05:10.431268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.849 [2024-04-26 16:05:10.431284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000009640 with addr=10.0.0.2, port=4420 00:22:30.849 [2024-04-26 16:05:10.431296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000009640 is same with the state(5) to be set 00:22:30.849 [2024-04-26 16:05:10.431311] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61400000d240 (9): Bad file descriptor 00:22:30.849 [2024-04-26 16:05:10.431326] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61400000f040 (9): Bad file descriptor 00:22:30.849 [2024-04-26 16:05:10.431340] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000016840 (9): Bad file descriptor 00:22:30.849 [2024-04-26 16:05:10.431452] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:30.849 [2024-04-26 16:05:10.431520] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:30.849 [2024-04-26 16:05:10.431897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.849 [2024-04-26 16:05:10.432244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.849 [2024-04-26 16:05:10.432260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007840 with addr=10.0.0.2, port=4420 00:22:30.849 [2024-04-26 16:05:10.432272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007840 is same with the state(5) to be set 00:22:30.849 [2024-04-26 16:05:10.432286] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009640 (9): Bad file descriptor 00:22:30.849 [2024-04-26 16:05:10.432299] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:30.849 [2024-04-26 16:05:10.432309] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:30.849 [2024-04-26 16:05:10.432320] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
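A note on the repeated "posix_sock_create: *ERROR*: connect() failed, errno = 111" lines above: on Linux, errno 111 is ECONNREFUSED, meaning the initiator keeps dialing 10.0.0.2:4420 while no listener is accepting on that NVMe/TCP port during the target teardown/reset phase of the test. The following standalone sketch is only an illustration under that assumption (a plain blocking connect(), not SPDK's non-blocking posix sock code; the address and port are copied from the log):

/* Illustrative sketch, not SPDK source: a blocking TCP connect() to a
 * reachable host with no listener on the port fails with errno 111
 * (ECONNREFUSED) on Linux, matching "connect() failed, errno = 111". */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(4420),           /* NVMe/TCP port from the log */
	};
	inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
		/* With the peer up but no listener, this prints errno = 111. */
		printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
	}

	close(fd);
	return 0;
}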
00:22:30.849 [2024-04-26 16:05:10.432337] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:30.849 [2024-04-26 16:05:10.432347] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:30.849 [2024-04-26 16:05:10.432359] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:30.849 [2024-04-26 16:05:10.432374] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:30.849 [2024-04-26 16:05:10.432384] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:30.849 [2024-04-26 16:05:10.432393] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:30.849 [2024-04-26 16:05:10.432851] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:30.849 [2024-04-26 16:05:10.432871] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:30.849 [2024-04-26 16:05:10.432879] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:30.849 [2024-04-26 16:05:10.432891] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007840 (9): Bad file descriptor 00:22:30.849 [2024-04-26 16:05:10.432903] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:30.849 [2024-04-26 16:05:10.432913] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:30.849 [2024-04-26 16:05:10.432922] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:30.849 [2024-04-26 16:05:10.432981] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:30.849 [2024-04-26 16:05:10.432992] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:30.849 [2024-04-26 16:05:10.433001] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:30.849 [2024-04-26 16:05:10.433010] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:30.849 [2024-04-26 16:05:10.433052] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
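For readers decoding the long runs of "ABORTED - SQ DELETION (00/08) ... p:0 m:0 dnr:0" completions before and after this point: the value in parentheses is (status code type / status code), and 00/08 is Generic Command Status / Command Aborted due to SQ Deletion, which is what outstanding READs report once their I/O submission queue is deleted during the controller resets above. The decoder below is a hedged illustration, not SPDK's spdk_nvme_print_completion; it assumes the standard NVMe completion-queue-entry Dword 3 layout (bits 31:16 carry P, SC, SCT, CRD, M, DNR):

/* Illustrative sketch, not SPDK source: split the 16-bit completion status
 * field into the parts the log prints as "(sct/sc) ... p:_ m:_ dnr:_". */
#include <stdint.h>
#include <stdio.h>

struct status_fields {
	uint8_t p;    /* phase tag */
	uint8_t sc;   /* status code */
	uint8_t sct;  /* status code type */
	uint8_t m;    /* more */
	uint8_t dnr;  /* do not retry */
};

static struct status_fields decode_status(uint16_t raw)
{
	struct status_fields f = {
		.p   = raw & 0x1,
		.sc  = (raw >> 1) & 0xff,
		.sct = (raw >> 9) & 0x7,
		.m   = (raw >> 14) & 0x1,
		.dnr = (raw >> 15) & 0x1,
	};
	return f;
}

int main(void)
{
	/* SCT 0x0 / SC 0x08: the value behind "ABORTED - SQ DELETION (00/08)". */
	uint16_t raw = (uint16_t)((0x0 << 9) | (0x08 << 1));
	struct status_fields f = decode_status(raw);

	/* Prints: (00/08) p:0 m:0 dnr:0 */
	printf("(%02x/%02x) p:%u m:%u dnr:%u\n", f.sct, f.sc, f.p, f.m, f.dnr);
	return 0;
}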
00:22:30.849 [2024-04-26 16:05:10.433647] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000012c40 (9): Bad file descriptor 00:22:30.849 [2024-04-26 16:05:10.433688] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000014a40 (9): Bad file descriptor 00:22:30.849 [2024-04-26 16:05:10.433795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.849 [2024-04-26 16:05:10.433812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.849 [2024-04-26 16:05:10.433830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.849 [2024-04-26 16:05:10.433842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.849 [2024-04-26 16:05:10.433855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.849 [2024-04-26 16:05:10.433867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.849 [2024-04-26 16:05:10.433881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.849 [2024-04-26 16:05:10.433891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.849 [2024-04-26 16:05:10.433904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.849 [2024-04-26 16:05:10.433914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.849 [2024-04-26 16:05:10.433930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.849 [2024-04-26 16:05:10.433941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.849 [2024-04-26 16:05:10.433953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.849 [2024-04-26 16:05:10.433964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.849 [2024-04-26 16:05:10.433975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.849 [2024-04-26 16:05:10.433986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.849 [2024-04-26 16:05:10.433999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.849 [2024-04-26 16:05:10.434010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.849 [2024-04-26 16:05:10.434022] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.849 [2024-04-26 16:05:10.434033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.849 [2024-04-26 16:05:10.434045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.849 [2024-04-26 16:05:10.434055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.849 [2024-04-26 16:05:10.434068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.849 [2024-04-26 16:05:10.434085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.849 [2024-04-26 16:05:10.434097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.849 [2024-04-26 16:05:10.434108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.849 [2024-04-26 16:05:10.434120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.849 [2024-04-26 16:05:10.434130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.849 [2024-04-26 16:05:10.434142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.849 [2024-04-26 16:05:10.434153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.849 [2024-04-26 16:05:10.434166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.849 [2024-04-26 16:05:10.434176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.849 [2024-04-26 16:05:10.434188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.849 [2024-04-26 16:05:10.434199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.849 [2024-04-26 16:05:10.434211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.849 [2024-04-26 16:05:10.434222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.849 [2024-04-26 16:05:10.434234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.849 [2024-04-26 16:05:10.434244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.849 [2024-04-26 16:05:10.434257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.849 [2024-04-26 16:05:10.434267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.849 [2024-04-26 16:05:10.434280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.849 [2024-04-26 16:05:10.434290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.434982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.434992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.435003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.435013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.435024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.435036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.435048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.435058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.435074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.850 [2024-04-26 16:05:10.435088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.850 [2024-04-26 16:05:10.435100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.435110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.435122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.435132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.435144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.435154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.435167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.435176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.435188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.435197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.435209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.435218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.435230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.435239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.435251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.435261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.435272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400001b040 is same with the state(5) to be set 00:22:30.851 [2024-04-26 16:05:10.436596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.436617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.436633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.436645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.436656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.436666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.436678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.436692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.436704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.436714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.436726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.436737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.436749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.436759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.436770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.436780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.436792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.436803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.436814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.436824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.436836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.436846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.436858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.436867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.436878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.436888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.436899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.436909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.436920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.436929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.436941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.436951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.436964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.436974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.436985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.436996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.437008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.437017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.437029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.437038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.437050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.437060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.437076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.437087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.437098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.437109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.437120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.437131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.437142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.851 [2024-04-26 16:05:10.437151] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.851 [2024-04-26 16:05:10.437163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.852 [2024-04-26 16:05:10.437926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.852 [2024-04-26 16:05:10.437936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.853 [2024-04-26 16:05:10.437948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.853 [2024-04-26 16:05:10.437958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.853 [2024-04-26 16:05:10.437969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.853 [2024-04-26 16:05:10.437978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.853 [2024-04-26 16:05:10.437988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400001c240 is same with the state(5) to be set 00:22:30.853 [2024-04-26 16:05:10.439298] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: 
*NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:30.853 [2024-04-26 16:05:10.439334] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:30.853 [2024-04-26 16:05:10.439345] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:30.853 [2024-04-26 16:05:10.439442] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:30.853 [2024-04-26 16:05:10.439459] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:30.853 [2024-04-26 16:05:10.439470] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:30.853 [2024-04-26 16:05:10.439854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.853 [2024-04-26 16:05:10.440267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.853 [2024-04-26 16:05:10.440282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:22:30.853 [2024-04-26 16:05:10.440294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:22:30.853 [2024-04-26 16:05:10.440626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.853 [2024-04-26 16:05:10.440919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.853 [2024-04-26 16:05:10.440932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61400000b440 with addr=10.0.0.2, port=4420 00:22:30.853 [2024-04-26 16:05:10.440941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400000b440 is same with the state(5) to be set 00:22:30.853 [2024-04-26 16:05:10.441304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.853 [2024-04-26 16:05:10.441628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.853 [2024-04-26 16:05:10.441641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010e40 with addr=10.0.0.2, port=4420 00:22:30.853 [2024-04-26 16:05:10.441650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010e40 is same with the state(5) to be set 00:22:30.853 [2024-04-26 16:05:10.442431] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:30.853 [2024-04-26 16:05:10.442846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.853 [2024-04-26 16:05:10.443208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.853 [2024-04-26 16:05:10.443222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000016840 with addr=10.0.0.2, port=4420 00:22:30.853 [2024-04-26 16:05:10.443232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000016840 is same with the state(5) to be set 00:22:30.853 [2024-04-26 16:05:10.443554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.853 [2024-04-26 16:05:10.443908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.853 [2024-04-26 16:05:10.443921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error 
of tqpair=0x61400000f040 with addr=10.0.0.2, port=4420 00:22:30.853 [2024-04-26 16:05:10.443930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400000f040 is same with the state(5) to be set 00:22:30.853 [2024-04-26 16:05:10.444286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.853 [2024-04-26 16:05:10.444602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.853 [2024-04-26 16:05:10.444615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61400000d240 with addr=10.0.0.2, port=4420 00:22:30.853 [2024-04-26 16:05:10.444625] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400000d240 is same with the state(5) to be set 00:22:30.853 [2024-04-26 16:05:10.444638] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:22:30.853 [2024-04-26 16:05:10.444651] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61400000b440 (9): Bad file descriptor 00:22:30.853 [2024-04-26 16:05:10.444664] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000010e40 (9): Bad file descriptor 00:22:30.853 [2024-04-26 16:05:10.445163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.853 [2024-04-26 16:05:10.445499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.853 [2024-04-26 16:05:10.445514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000009640 with addr=10.0.0.2, port=4420 00:22:30.853 [2024-04-26 16:05:10.445524] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000009640 is same with the state(5) to be set 00:22:30.853 [2024-04-26 16:05:10.445538] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000016840 (9): Bad file descriptor 00:22:30.853 [2024-04-26 16:05:10.445550] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61400000f040 (9): Bad file descriptor 00:22:30.853 [2024-04-26 16:05:10.445562] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61400000d240 (9): Bad file descriptor 00:22:30.853 [2024-04-26 16:05:10.445572] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:30.853 [2024-04-26 16:05:10.445581] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:30.853 [2024-04-26 16:05:10.445590] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:30.853 [2024-04-26 16:05:10.445604] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:30.853 [2024-04-26 16:05:10.445614] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:30.853 [2024-04-26 16:05:10.445622] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:22:30.853 [2024-04-26 16:05:10.445635] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:30.853 [2024-04-26 16:05:10.445644] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:30.853 [2024-04-26 16:05:10.445653] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:30.853 [2024-04-26 16:05:10.445726] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:30.853 [2024-04-26 16:05:10.445741] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:30.853 [2024-04-26 16:05:10.445750] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:30.853 [2024-04-26 16:05:10.445759] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:30.853 [2024-04-26 16:05:10.445783] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009640 (9): Bad file descriptor 00:22:30.853 [2024-04-26 16:05:10.445794] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:30.853 [2024-04-26 16:05:10.445803] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:30.853 [2024-04-26 16:05:10.445812] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:30.853 [2024-04-26 16:05:10.445830] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:30.853 [2024-04-26 16:05:10.445839] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:30.853 [2024-04-26 16:05:10.445847] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:30.853 [2024-04-26 16:05:10.445861] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:30.853 [2024-04-26 16:05:10.445870] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:30.853 [2024-04-26 16:05:10.445878] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
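Note on the connect() errors in the block above: errno 111 on Linux is ECONNREFUSED, i.e. nothing was accepting TCP connections on 10.0.0.2:4420 (the NVMe/TCP listener) at the moment the reset path tried to reconnect, and the follow-on "(9): Bad file descriptor" flush failures are EBADF on sockets that have already been torn down. A minimal standalone C sketch (not SPDK code; the address and port below are placeholders) that reproduces the same errno by connecting to a port with no listener:

/* Sketch: reproduce "connect() failed, errno = 111" (ECONNREFUSED) outside SPDK.
 * Plain blocking TCP connect to an address/port where nothing is listening.
 * Build: cc -o connref connref.c    Run: ./connref 10.0.0.2 4420   (placeholder target)
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        const char *ip = argc > 1 ? argv[1] : "127.0.0.1";
        int port = argc > 2 ? atoi(argv[2]) : 4420;

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
                perror("socket");
                return 1;
        }

        struct sockaddr_in sa;
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(port);
        if (inet_pton(AF_INET, ip, &sa.sin_addr) != 1) {
                fprintf(stderr, "bad address: %s\n", ip);
                close(fd);
                return 1;
        }

        /* With no listener on ip:port, connect() fails and errno is 111 (ECONNREFUSED) on Linux. */
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
                printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        } else {
                printf("connected (a listener was present)\n");
        }

        close(fd);
        return 0;
}

Run against any closed port and the output matches the log's "connect() failed, errno = 111".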
00:22:30.853 [2024-04-26 16:05:10.445940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.853 [2024-04-26 16:05:10.445957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.853 [2024-04-26 16:05:10.445974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.853 [2024-04-26 16:05:10.445985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.853 [2024-04-26 16:05:10.445998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.853 [2024-04-26 16:05:10.446008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.853 [2024-04-26 16:05:10.446021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.853 [2024-04-26 16:05:10.446037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.853 [2024-04-26 16:05:10.446049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.853 [2024-04-26 16:05:10.446059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.853 [2024-04-26 16:05:10.446077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.853 [2024-04-26 16:05:10.446088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.853 [2024-04-26 16:05:10.446101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.853 [2024-04-26 16:05:10.446111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 
16:05:10.446189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446401] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.854 [2024-04-26 16:05:10.446912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.854 [2024-04-26 16:05:10.446921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.855 [2024-04-26 16:05:10.446933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.855 [2024-04-26 16:05:10.446944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.855 [2024-04-26 16:05:10.446956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.855 [2024-04-26 16:05:10.446966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.855 [2024-04-26 16:05:10.446978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.855 [2024-04-26 16:05:10.446989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.855 [2024-04-26 16:05:10.447001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.855 [2024-04-26 16:05:10.447012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.855 [2024-04-26 16:05:10.447023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.855 [2024-04-26 16:05:10.447034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.855 [2024-04-26 16:05:10.447047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.855 [2024-04-26 16:05:10.447057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.855 [2024-04-26 16:05:10.447073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.855 [2024-04-26 16:05:10.447090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.855 [2024-04-26 16:05:10.447102] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.855 [2024-04-26 16:05:10.447115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.855 [2024-04-26 16:05:10.447127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.855 [2024-04-26 16:05:10.447137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.855 [2024-04-26 16:05:10.447150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.855 [2024-04-26 16:05:10.447161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.855 [2024-04-26 16:05:10.447174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.855 [2024-04-26 16:05:10.447184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.855 [2024-04-26 16:05:10.447197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.855 [2024-04-26 16:05:10.447208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.855 [2024-04-26 16:05:10.447220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.855 [2024-04-26 16:05:10.447231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.855 [2024-04-26 16:05:10.447243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.855 [2024-04-26 16:05:10.447254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.855 [2024-04-26 16:05:10.447267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.855 [2024-04-26 16:05:10.447277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.855 [2024-04-26 16:05:10.447288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.855 [2024-04-26 16:05:10.447299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.855 [2024-04-26 16:05:10.447312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.855 [2024-04-26 16:05:10.447323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.855 [2024-04-26 16:05:10.447335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.855 [2024-04-26 16:05:10.447345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.855 [2024-04-26 16:05:10.447358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.855 [2024-04-26 16:05:10.447368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.855 [2024-04-26 16:05:10.447380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.855 [2024-04-26 16:05:10.447390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.855 [2024-04-26 16:05:10.447405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.855 [2024-04-26 16:05:10.447415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.855 [2024-04-26 16:05:10.447426] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400001c840 is same with the state(5) to be set 00:22:30.855 [2024-04-26 16:05:10.453185] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:30.855 [2024-04-26 16:05:10.453214] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:30.855 [2024-04-26 16:05:10.453224] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:30.855 [2024-04-26 16:05:10.453233] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
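The repeated "ABORTED - SQ DELETION (00/08)" completions above encode the NVMe status as (SCT/SC): status code type 0x00 (generic command status) with status code 0x08 (command aborted due to SQ deletion), meaning the queued READs were failed back when their submission queue was deleted during the controller reset, not because of a media error. A small illustrative C sketch of that decoding (only a handful of generic codes are listed, with names approximating the spec; this is not SPDK's own print routine):

/* Sketch: decode the "(SCT/SC)" pair printed in the log, e.g. (00/08) -> ABORTED - SQ DELETION.
 * Names below approximate the NVMe specification; list is intentionally incomplete.
 */
#include <stdio.h>

static const char *sct_name(unsigned sct)
{
        switch (sct) {
        case 0x0: return "GENERIC COMMAND STATUS";
        case 0x1: return "COMMAND SPECIFIC STATUS";
        case 0x2: return "MEDIA AND DATA INTEGRITY ERRORS";
        default:  return "OTHER/VENDOR SPECIFIC";
        }
}

static const char *generic_sc_name(unsigned sc)
{
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x04: return "DATA TRANSFER ERROR";
        case 0x07: return "ABORTED - BY REQUEST";
        case 0x08: return "ABORTED - SQ DELETION";
        default:   return "(other generic status code)";
        }
}

int main(void)
{
        unsigned sct = 0x00, sc = 0x08;   /* the pair seen throughout this log */

        printf("(%02x/%02x) -> type: %s, code: %s\n",
               sct, sc, sct_name(sct),
               sct == 0x0 ? generic_sc_name(sc) : "(see type-specific table)");
        return 0;
}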
00:22:30.855 task offset: 27264 on job bdev=Nvme1n1 fails
00:22:30.855
00:22:30.855 Latency(us)
00:22:30.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:30.855 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:30.855 Job: Nvme1n1 ended in about 0.90 seconds with error
00:22:30.855 Verification LBA range: start 0x0 length 0x400
00:22:30.855 Nvme1n1 : 0.90 213.38 13.34 71.13 0.00 222531.51 5299.87 244363.80
00:22:30.855 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:30.855 Job: Nvme2n1 ended in about 0.94 seconds with error
00:22:30.855 Verification LBA range: start 0x0 length 0x400
00:22:30.855 Nvme2n1 : 0.94 203.59 12.72 67.86 0.00 229040.53 22795.13 238892.97
00:22:30.855 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:30.855 Job: Nvme3n1 ended in about 0.92 seconds with error
00:22:30.855 Verification LBA range: start 0x0 length 0x400
00:22:30.855 Nvme3n1 : 0.92 209.13 13.07 69.71 0.00 218491.55 9573.95 224304.08
00:22:30.855 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:30.855 Job: Nvme4n1 ended in about 0.95 seconds with error
00:22:30.855 Verification LBA range: start 0x0 length 0x400
00:22:30.855 Nvme4n1 : 0.95 202.19 12.64 67.40 0.00 222123.63 21997.30 240716.58
00:22:30.855 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:30.855 Job: Nvme5n1 ended in about 0.93 seconds with error
00:22:30.855 Verification LBA range: start 0x0 length 0x400
00:22:30.855 Nvme5n1 : 0.93 206.72 12.92 68.91 0.00 212626.14 6496.61 248011.02
00:22:30.855 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:30.855 Job: Nvme6n1 ended in about 0.93 seconds with error
00:22:30.855 Verification LBA range: start 0x0 length 0x400
00:22:30.855 Nvme6n1 : 0.93 137.67 8.60 68.83 0.00 278249.66 6582.09 331897.10
00:22:30.855 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:30.855 Job: Nvme7n1 ended in about 0.95 seconds with error
00:22:30.855 Verification LBA range: start 0x0 length 0x400
00:22:30.855 Nvme7n1 : 0.95 134.41 8.40 67.20 0.00 280045.30 41031.23 273541.57
00:22:30.855 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:30.856 Job: Nvme8n1 ended in about 0.96 seconds with error
00:22:30.856 Verification LBA range: start 0x0 length 0x400
00:22:30.856 Nvme8n1 : 0.96 133.09 8.32 66.54 0.00 277541.99 23592.96 246187.41
00:22:30.856 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:30.856 Verification LBA range: start 0x0 length 0x400
00:22:30.856 Nvme9n1 : 0.92 209.38 13.09 0.00 0.00 255930.84 22567.18 232510.33
00:22:30.856 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:30.856 Job: Nvme10n1 ended in about 0.93 seconds with error
00:22:30.856 Verification LBA range: start 0x0 length 0x400
00:22:30.856 Nvme10n1 : 0.93 137.24 8.58 68.62 0.00 256663.67 22909.11 271717.95
00:22:30.856 ===================================================================================================================
00:22:30.856 Total : 1786.80 111.67 616.21 0.00 241844.22 5299.87 331897.10
00:22:31.116 [2024-04-26 16:05:10.543442] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:31.116 [2024-04-26 16:05:10.543502] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:31.116 [2024-04-26 16:05:10.544047]
posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.116 [2024-04-26 16:05:10.544433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.116 [2024-04-26 16:05:10.544453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007840 with addr=10.0.0.2, port=4420 00:22:31.116 [2024-04-26 16:05:10.544467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007840 is same with the state(5) to be set 00:22:31.116 [2024-04-26 16:05:10.544481] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:31.116 [2024-04-26 16:05:10.544491] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:31.116 [2024-04-26 16:05:10.544503] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:31.116 [2024-04-26 16:05:10.544591] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:31.116 [2024-04-26 16:05:10.544691] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.116 [2024-04-26 16:05:10.545128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.116 [2024-04-26 16:05:10.545419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.116 [2024-04-26 16:05:10.545434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000014a40 with addr=10.0.0.2, port=4420 00:22:31.116 [2024-04-26 16:05:10.545446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000014a40 is same with the state(5) to be set 00:22:31.116 [2024-04-26 16:05:10.545893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.116 [2024-04-26 16:05:10.546186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.116 [2024-04-26 16:05:10.546202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000012c40 with addr=10.0.0.2, port=4420 00:22:31.116 [2024-04-26 16:05:10.546213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000012c40 is same with the state(5) to be set 00:22:31.116 [2024-04-26 16:05:10.546232] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007840 (9): Bad file descriptor 00:22:31.116 [2024-04-26 16:05:10.546254] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:31.116 [2024-04-26 16:05:10.546277] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:31.116 [2024-04-26 16:05:10.546291] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:31.116 [2024-04-26 16:05:10.546304] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:31.116 [2024-04-26 16:05:10.546315] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:31.116 [2024-04-26 16:05:10.546329] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
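errno 111 is ECONNREFUSED: at this point in the shutdown test the target process is already gone, so every reconnect attempt to 10.0.0.2:4420 is rejected immediately and bdev_nvme keeps reporting that a failover is already in progress. Two quick checks when this shows up outside of a deliberate shutdown, sketched with the namespace and address used on this rig (adjust for other setups):

  # Is anything still listening on the NVMe/TCP port inside the target namespace?
  ip netns exec cvl_0_0_ns_spdk ss -ltn 'sport = :4420'
  # Reproduce the refused connect from the initiator side.
  timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' || echo 'connect refused or timed out'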
00:22:31.116 [2024-04-26 16:05:10.546340] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:31.116 [2024-04-26 16:05:10.546776] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:31.116 [2024-04-26 16:05:10.546801] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:31.116 [2024-04-26 16:05:10.546817] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:31.116 [2024-04-26 16:05:10.546828] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:31.116 [2024-04-26 16:05:10.546843] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:31.116 [2024-04-26 16:05:10.546854] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:31.116 [2024-04-26 16:05:10.546933] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000014a40 (9): Bad file descriptor 00:22:31.116 [2024-04-26 16:05:10.546950] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000012c40 (9): Bad file descriptor 00:22:31.116 [2024-04-26 16:05:10.546961] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:31.116 [2024-04-26 16:05:10.546970] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:31.116 [2024-04-26 16:05:10.546981] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:31.116 [2024-04-26 16:05:10.547050] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:31.116 [2024-04-26 16:05:10.547064] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:31.116 [2024-04-26 16:05:10.547504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.116 [2024-04-26 16:05:10.547868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.116 [2024-04-26 16:05:10.547883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010e40 with addr=10.0.0.2, port=4420 00:22:31.116 [2024-04-26 16:05:10.547894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010e40 is same with the state(5) to be set 00:22:31.116 [2024-04-26 16:05:10.548229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.116 [2024-04-26 16:05:10.548528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.116 [2024-04-26 16:05:10.548542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61400000b440 with addr=10.0.0.2, port=4420 00:22:31.116 [2024-04-26 16:05:10.548553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400000b440 is same with the state(5) to be set 00:22:31.116 [2024-04-26 16:05:10.548890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.116 [2024-04-26 16:05:10.549308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.116 [2024-04-26 16:05:10.549322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:22:31.116 [2024-04-26 16:05:10.549333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:22:31.116 [2024-04-26 16:05:10.549718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.116 [2024-04-26 16:05:10.550033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.116 [2024-04-26 16:05:10.550049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61400000d240 with addr=10.0.0.2, port=4420 00:22:31.116 [2024-04-26 16:05:10.550060] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400000d240 is same with the state(5) to be set 00:22:31.116 [2024-04-26 16:05:10.550416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.116 [2024-04-26 16:05:10.550778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.117 [2024-04-26 16:05:10.550793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61400000f040 with addr=10.0.0.2, port=4420 00:22:31.117 [2024-04-26 16:05:10.550803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61400000f040 is same with the state(5) to be set 00:22:31.117 [2024-04-26 16:05:10.551150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.117 [2024-04-26 16:05:10.551434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.117 [2024-04-26 16:05:10.551449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000016840 with addr=10.0.0.2, port=4420 00:22:31.117 [2024-04-26 16:05:10.551459] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000016840 is same with the state(5) to be set 00:22:31.117 [2024-04-26 16:05:10.551470] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:31.117 [2024-04-26 16:05:10.551479] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:31.117 [2024-04-26 16:05:10.551488] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:31.117 [2024-04-26 16:05:10.551502] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:31.117 [2024-04-26 16:05:10.551512] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:31.117 [2024-04-26 16:05:10.551522] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:31.117 [2024-04-26 16:05:10.551579] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.117 [2024-04-26 16:05:10.551590] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.117 [2024-04-26 16:05:10.551942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.117 [2024-04-26 16:05:10.552393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.117 [2024-04-26 16:05:10.552408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000009640 with addr=10.0.0.2, port=4420 00:22:31.117 [2024-04-26 16:05:10.552419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000009640 is same with the state(5) to be set 00:22:31.117 [2024-04-26 16:05:10.552433] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000010e40 (9): Bad file descriptor 00:22:31.117 [2024-04-26 16:05:10.552446] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61400000b440 (9): Bad file descriptor 00:22:31.117 [2024-04-26 16:05:10.552459] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:22:31.117 [2024-04-26 16:05:10.552471] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61400000d240 (9): Bad file descriptor 00:22:31.117 [2024-04-26 16:05:10.552484] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61400000f040 (9): Bad file descriptor 00:22:31.117 [2024-04-26 16:05:10.552496] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000016840 (9): Bad file descriptor 00:22:31.117 [2024-04-26 16:05:10.552549] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009640 (9): Bad file descriptor 00:22:31.117 [2024-04-26 16:05:10.552564] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:31.117 [2024-04-26 16:05:10.552573] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:31.117 [2024-04-26 16:05:10.552582] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:22:31.117 [2024-04-26 16:05:10.552597] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:31.117 [2024-04-26 16:05:10.552605] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:31.117 [2024-04-26 16:05:10.552614] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:31.117 [2024-04-26 16:05:10.552627] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:31.117 [2024-04-26 16:05:10.552640] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:31.117 [2024-04-26 16:05:10.552649] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:31.117 [2024-04-26 16:05:10.552662] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:31.117 [2024-04-26 16:05:10.552670] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:31.117 [2024-04-26 16:05:10.552678] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:31.117 [2024-04-26 16:05:10.552691] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:31.117 [2024-04-26 16:05:10.552700] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:31.117 [2024-04-26 16:05:10.552709] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:31.117 [2024-04-26 16:05:10.552721] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:31.117 [2024-04-26 16:05:10.552729] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:31.117 [2024-04-26 16:05:10.552739] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:31.117 [2024-04-26 16:05:10.552778] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.117 [2024-04-26 16:05:10.552788] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.117 [2024-04-26 16:05:10.552797] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.117 [2024-04-26 16:05:10.552806] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.117 [2024-04-26 16:05:10.552814] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.117 [2024-04-26 16:05:10.552822] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.117 [2024-04-26 16:05:10.552830] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:31.117 [2024-04-26 16:05:10.552839] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:31.117 [2024-04-26 16:05:10.552848] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
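Every controller (cnode1 through cnode10) walks the same path here: the reconnect poll fails, nvme_ctrlr_fail marks the controller failed, and the bdev_nvme reset completes with an error, which is the expected outcome of this shutdown scenario. While such a storm is in progress, the controller list can be polled over the bdevperf RPC socket with the same method this suite uses later; a sketch only, with the usual rpc.py location in the SPDK checkout assumed:

  # Watch controller state over the bdevperf RPC socket during a target outage.
  for _ in $(seq 1 10); do
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
    sleep 1
  done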
00:22:31.117 [2024-04-26 16:05:10.552885] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:34.409 16:05:13 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:34.409 16:05:13 -- target/shutdown.sh@139 -- # sleep 1 00:22:34.978 16:05:14 -- target/shutdown.sh@142 -- # kill -9 2513712 00:22:34.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2513712) - No such process 00:22:34.978 16:05:14 -- target/shutdown.sh@142 -- # true 00:22:34.978 16:05:14 -- target/shutdown.sh@144 -- # stoptarget 00:22:34.978 16:05:14 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:34.978 16:05:14 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:34.978 16:05:14 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:34.978 16:05:14 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:34.978 16:05:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:34.978 16:05:14 -- nvmf/common.sh@117 -- # sync 00:22:34.978 16:05:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:34.978 16:05:14 -- nvmf/common.sh@120 -- # set +e 00:22:34.978 16:05:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:34.978 16:05:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:35.237 rmmod nvme_tcp 00:22:35.237 rmmod nvme_fabrics 00:22:35.237 rmmod nvme_keyring 00:22:35.237 16:05:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:35.237 16:05:14 -- nvmf/common.sh@124 -- # set -e 00:22:35.237 16:05:14 -- nvmf/common.sh@125 -- # return 0 00:22:35.237 16:05:14 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:22:35.238 16:05:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:35.238 16:05:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:35.238 16:05:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:35.238 16:05:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:35.238 16:05:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:35.238 16:05:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.238 16:05:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.238 16:05:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.145 16:05:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:37.145 00:22:37.145 real 0m12.353s 00:22:37.145 user 0m36.724s 00:22:37.145 sys 0m1.675s 00:22:37.145 16:05:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:37.145 16:05:16 -- common/autotest_common.sh@10 -- # set +x 00:22:37.145 ************************************ 00:22:37.145 END TEST nvmf_shutdown_tc3 00:22:37.145 ************************************ 00:22:37.145 16:05:16 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:37.145 00:22:37.145 real 0m47.362s 00:22:37.145 user 2m21.762s 00:22:37.145 sys 0m9.452s 00:22:37.145 16:05:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:37.145 16:05:16 -- common/autotest_common.sh@10 -- # set +x 00:22:37.145 ************************************ 00:22:37.145 END TEST nvmf_shutdown 00:22:37.145 ************************************ 00:22:37.403 16:05:16 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:22:37.403 16:05:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:37.403 16:05:16 -- common/autotest_common.sh@10 -- # set +x 00:22:37.403 16:05:16 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:22:37.403 16:05:16 -- 
common/autotest_common.sh@710 -- # xtrace_disable 00:22:37.403 16:05:16 -- common/autotest_common.sh@10 -- # set +x 00:22:37.403 16:05:16 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:22:37.403 16:05:16 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:37.403 16:05:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:37.403 16:05:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:37.403 16:05:16 -- common/autotest_common.sh@10 -- # set +x 00:22:37.403 ************************************ 00:22:37.403 START TEST nvmf_multicontroller 00:22:37.403 ************************************ 00:22:37.403 16:05:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:37.403 * Looking for test storage... 00:22:37.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:37.403 16:05:17 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.403 16:05:17 -- nvmf/common.sh@7 -- # uname -s 00:22:37.403 16:05:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.403 16:05:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.403 16:05:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.403 16:05:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.662 16:05:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.662 16:05:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.662 16:05:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.662 16:05:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.662 16:05:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.662 16:05:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.662 16:05:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:37.662 16:05:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:37.662 16:05:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.662 16:05:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.662 16:05:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.662 16:05:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.662 16:05:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.662 16:05:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.662 16:05:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.662 16:05:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.662 16:05:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.662 16:05:17 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.662 16:05:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.662 16:05:17 -- paths/export.sh@5 -- # export PATH 00:22:37.662 16:05:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.662 16:05:17 -- nvmf/common.sh@47 -- # : 0 00:22:37.662 16:05:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:37.662 16:05:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:37.662 16:05:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.662 16:05:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.662 16:05:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.662 16:05:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:37.662 16:05:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:37.662 16:05:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:37.662 16:05:17 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:37.662 16:05:17 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:37.662 16:05:17 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:37.662 16:05:17 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:37.662 16:05:17 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:37.662 16:05:17 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:37.662 16:05:17 -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:37.662 16:05:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:37.663 16:05:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.663 16:05:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:37.663 16:05:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:37.663 16:05:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:37.663 16:05:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.663 16:05:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.663 16:05:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:22:37.663 16:05:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:37.663 16:05:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:37.663 16:05:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:37.663 16:05:17 -- common/autotest_common.sh@10 -- # set +x 00:22:43.061 16:05:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:43.061 16:05:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:43.061 16:05:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:43.061 16:05:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:43.061 16:05:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:43.061 16:05:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:43.061 16:05:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:43.061 16:05:22 -- nvmf/common.sh@295 -- # net_devs=() 00:22:43.061 16:05:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:43.061 16:05:22 -- nvmf/common.sh@296 -- # e810=() 00:22:43.061 16:05:22 -- nvmf/common.sh@296 -- # local -ga e810 00:22:43.061 16:05:22 -- nvmf/common.sh@297 -- # x722=() 00:22:43.061 16:05:22 -- nvmf/common.sh@297 -- # local -ga x722 00:22:43.061 16:05:22 -- nvmf/common.sh@298 -- # mlx=() 00:22:43.061 16:05:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:43.061 16:05:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.061 16:05:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.061 16:05:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.061 16:05:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.061 16:05:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.061 16:05:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.061 16:05:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.061 16:05:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.061 16:05:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.061 16:05:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.061 16:05:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.061 16:05:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:43.061 16:05:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:43.061 16:05:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:43.061 16:05:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:43.061 16:05:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:43.061 16:05:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:43.061 16:05:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:43.061 16:05:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:43.061 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:43.061 16:05:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:43.061 16:05:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:43.061 16:05:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.061 16:05:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.061 16:05:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:43.061 16:05:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:43.061 16:05:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:43.061 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:43.061 16:05:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:22:43.061 16:05:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:43.061 16:05:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.061 16:05:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.061 16:05:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:43.061 16:05:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:43.061 16:05:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:43.061 16:05:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:43.061 16:05:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:43.061 16:05:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.061 16:05:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:43.061 16:05:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.061 16:05:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:43.061 Found net devices under 0000:86:00.0: cvl_0_0 00:22:43.061 16:05:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.061 16:05:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:43.061 16:05:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.061 16:05:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:43.061 16:05:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.061 16:05:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:43.061 Found net devices under 0000:86:00.1: cvl_0_1 00:22:43.061 16:05:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.061 16:05:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:43.061 16:05:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:43.061 16:05:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:43.061 16:05:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:43.061 16:05:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:43.061 16:05:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.061 16:05:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.061 16:05:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.061 16:05:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:43.061 16:05:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.061 16:05:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.061 16:05:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:43.061 16:05:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.061 16:05:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.061 16:05:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:43.061 16:05:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:43.061 16:05:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.061 16:05:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.061 16:05:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.061 16:05:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:43.061 16:05:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:43.061 16:05:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.061 16:05:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:43.061 16:05:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:22:43.061 16:05:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:43.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:22:43.061 00:22:43.061 --- 10.0.0.2 ping statistics --- 00:22:43.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.061 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:22:43.061 16:05:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:43.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:22:43.061 00:22:43.061 --- 10.0.0.1 ping statistics --- 00:22:43.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.061 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:22:43.061 16:05:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.061 16:05:22 -- nvmf/common.sh@411 -- # return 0 00:22:43.061 16:05:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:43.061 16:05:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.061 16:05:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:43.061 16:05:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:43.061 16:05:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.061 16:05:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:43.061 16:05:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:43.061 16:05:22 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:43.061 16:05:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:43.061 16:05:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:43.061 16:05:22 -- common/autotest_common.sh@10 -- # set +x 00:22:43.061 16:05:22 -- nvmf/common.sh@470 -- # nvmfpid=2518451 00:22:43.061 16:05:22 -- nvmf/common.sh@471 -- # waitforlisten 2518451 00:22:43.061 16:05:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:43.061 16:05:22 -- common/autotest_common.sh@817 -- # '[' -z 2518451 ']' 00:22:43.061 16:05:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.061 16:05:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:43.062 16:05:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.062 16:05:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:43.062 16:05:22 -- common/autotest_common.sh@10 -- # set +x 00:22:43.062 [2024-04-26 16:05:22.497815] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:43.062 [2024-04-26 16:05:22.497922] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.062 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.062 [2024-04-26 16:05:22.607998] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:43.322 [2024-04-26 16:05:22.826427] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
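The two ping runs above confirm the wiring that nvmf_tcp_init traced out just before them: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, and TCP port 4420 is opened for it. A condensed sketch of the same wiring, with the interface names from this rig as placeholders:

  # Rough standalone equivalent of the nvmf_tcp_init steps traced above.
  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1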
00:22:43.322 [2024-04-26 16:05:22.826470] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.322 [2024-04-26 16:05:22.826480] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.322 [2024-04-26 16:05:22.826490] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.322 [2024-04-26 16:05:22.826499] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:43.322 [2024-04-26 16:05:22.826624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.322 [2024-04-26 16:05:22.826685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.322 [2024-04-26 16:05:22.826692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:43.889 16:05:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:43.890 16:05:23 -- common/autotest_common.sh@850 -- # return 0 00:22:43.890 16:05:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:43.890 16:05:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:43.890 16:05:23 -- common/autotest_common.sh@10 -- # set +x 00:22:43.890 16:05:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.890 16:05:23 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:43.890 16:05:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.890 16:05:23 -- common/autotest_common.sh@10 -- # set +x 00:22:43.890 [2024-04-26 16:05:23.308029] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.890 16:05:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.890 16:05:23 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:43.890 16:05:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.890 16:05:23 -- common/autotest_common.sh@10 -- # set +x 00:22:43.890 Malloc0 00:22:43.890 16:05:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.890 16:05:23 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:43.890 16:05:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.890 16:05:23 -- common/autotest_common.sh@10 -- # set +x 00:22:43.890 16:05:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.890 16:05:23 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:43.890 16:05:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.890 16:05:23 -- common/autotest_common.sh@10 -- # set +x 00:22:43.890 16:05:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.890 16:05:23 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:43.890 16:05:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.890 16:05:23 -- common/autotest_common.sh@10 -- # set +x 00:22:43.890 [2024-04-26 16:05:23.443160] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.890 16:05:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.890 16:05:23 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:43.890 16:05:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.890 16:05:23 
-- common/autotest_common.sh@10 -- # set +x 00:22:43.890 [2024-04-26 16:05:23.451101] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:43.890 16:05:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.890 16:05:23 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:43.890 16:05:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.890 16:05:23 -- common/autotest_common.sh@10 -- # set +x 00:22:43.890 Malloc1 00:22:43.890 16:05:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.890 16:05:23 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:43.890 16:05:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.890 16:05:23 -- common/autotest_common.sh@10 -- # set +x 00:22:43.890 16:05:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.890 16:05:23 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:43.890 16:05:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.890 16:05:23 -- common/autotest_common.sh@10 -- # set +x 00:22:43.890 16:05:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:43.890 16:05:23 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:43.890 16:05:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:43.890 16:05:23 -- common/autotest_common.sh@10 -- # set +x 00:22:44.149 16:05:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.149 16:05:23 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:44.149 16:05:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.149 16:05:23 -- common/autotest_common.sh@10 -- # set +x 00:22:44.149 16:05:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.149 16:05:23 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:44.149 16:05:23 -- host/multicontroller.sh@44 -- # bdevperf_pid=2518657 00:22:44.149 16:05:23 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:44.149 16:05:23 -- host/multicontroller.sh@47 -- # waitforlisten 2518657 /var/tmp/bdevperf.sock 00:22:44.149 16:05:23 -- common/autotest_common.sh@817 -- # '[' -z 2518657 ']' 00:22:44.149 16:05:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:44.149 16:05:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:44.149 16:05:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:44.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
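Before bdevperf starts, the rpc_cmd calls above stand up the target side against its default RPC socket: a TCP transport, a malloc bdev per subsystem, and two subsystems (cnode1/cnode2) each listening on 10.0.0.2 ports 4420 and 4421. The same sequence as plain rpc.py calls, with the script path in the SPDK checkout assumed:

  # Target-side setup mirrored from the rpc_cmd trace above (sketch).
  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # ...then the same again with Malloc1, cnode2 and serial SPDK00000000000002.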
00:22:44.149 16:05:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:44.149 16:05:23 -- common/autotest_common.sh@10 -- # set +x 00:22:45.086 16:05:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:45.086 16:05:24 -- common/autotest_common.sh@850 -- # return 0 00:22:45.086 16:05:24 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:45.086 16:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.086 16:05:24 -- common/autotest_common.sh@10 -- # set +x 00:22:45.086 NVMe0n1 00:22:45.086 16:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.086 16:05:24 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:45.086 16:05:24 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:45.086 16:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.086 16:05:24 -- common/autotest_common.sh@10 -- # set +x 00:22:45.086 16:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.086 1 00:22:45.086 16:05:24 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:45.086 16:05:24 -- common/autotest_common.sh@638 -- # local es=0 00:22:45.086 16:05:24 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:45.086 16:05:24 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:22:45.086 16:05:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:45.086 16:05:24 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:22:45.086 16:05:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:45.086 16:05:24 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:45.086 16:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.086 16:05:24 -- common/autotest_common.sh@10 -- # set +x 00:22:45.086 request: 00:22:45.086 { 00:22:45.086 "name": "NVMe0", 00:22:45.086 "trtype": "tcp", 00:22:45.086 "traddr": "10.0.0.2", 00:22:45.086 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:45.086 "hostaddr": "10.0.0.2", 00:22:45.086 "hostsvcid": "60000", 00:22:45.086 "adrfam": "ipv4", 00:22:45.086 "trsvcid": "4420", 00:22:45.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.086 "method": "bdev_nvme_attach_controller", 00:22:45.086 "req_id": 1 00:22:45.086 } 00:22:45.086 Got JSON-RPC error response 00:22:45.086 response: 00:22:45.086 { 00:22:45.086 "code": -114, 00:22:45.086 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:45.086 } 00:22:45.086 16:05:24 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:45.086 16:05:24 -- common/autotest_common.sh@641 -- # es=1 00:22:45.086 16:05:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:45.086 16:05:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:45.086 16:05:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:45.086 16:05:24 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:45.086 16:05:24 -- common/autotest_common.sh@638 -- # local es=0 00:22:45.086 16:05:24 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:45.086 16:05:24 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:22:45.087 16:05:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:45.087 16:05:24 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:22:45.087 16:05:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:45.087 16:05:24 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:45.087 16:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.087 16:05:24 -- common/autotest_common.sh@10 -- # set +x 00:22:45.087 request: 00:22:45.087 { 00:22:45.087 "name": "NVMe0", 00:22:45.087 "trtype": "tcp", 00:22:45.087 "traddr": "10.0.0.2", 00:22:45.087 "hostaddr": "10.0.0.2", 00:22:45.087 "hostsvcid": "60000", 00:22:45.087 "adrfam": "ipv4", 00:22:45.087 "trsvcid": "4420", 00:22:45.087 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:45.087 "method": "bdev_nvme_attach_controller", 00:22:45.087 "req_id": 1 00:22:45.087 } 00:22:45.087 Got JSON-RPC error response 00:22:45.087 response: 00:22:45.087 { 00:22:45.087 "code": -114, 00:22:45.087 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:45.087 } 00:22:45.087 16:05:24 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:45.087 16:05:24 -- common/autotest_common.sh@641 -- # es=1 00:22:45.087 16:05:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:45.087 16:05:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:45.087 16:05:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:45.087 16:05:24 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:45.087 16:05:24 -- common/autotest_common.sh@638 -- # local es=0 00:22:45.087 16:05:24 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:45.087 16:05:24 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:22:45.087 16:05:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:45.087 16:05:24 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:22:45.087 16:05:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:45.087 16:05:24 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:45.087 16:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.087 16:05:24 -- common/autotest_common.sh@10 -- # set +x 00:22:45.087 request: 00:22:45.087 { 00:22:45.087 "name": "NVMe0", 00:22:45.087 "trtype": "tcp", 00:22:45.087 "traddr": "10.0.0.2", 00:22:45.087 "hostaddr": 
"10.0.0.2", 00:22:45.087 "hostsvcid": "60000", 00:22:45.087 "adrfam": "ipv4", 00:22:45.087 "trsvcid": "4420", 00:22:45.087 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.087 "multipath": "disable", 00:22:45.087 "method": "bdev_nvme_attach_controller", 00:22:45.087 "req_id": 1 00:22:45.087 } 00:22:45.087 Got JSON-RPC error response 00:22:45.087 response: 00:22:45.087 { 00:22:45.087 "code": -114, 00:22:45.087 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:22:45.087 } 00:22:45.087 16:05:24 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:45.087 16:05:24 -- common/autotest_common.sh@641 -- # es=1 00:22:45.087 16:05:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:45.087 16:05:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:45.087 16:05:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:45.087 16:05:24 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:45.087 16:05:24 -- common/autotest_common.sh@638 -- # local es=0 00:22:45.087 16:05:24 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:45.087 16:05:24 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:22:45.087 16:05:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:45.087 16:05:24 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:22:45.087 16:05:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:45.087 16:05:24 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:45.087 16:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.087 16:05:24 -- common/autotest_common.sh@10 -- # set +x 00:22:45.087 request: 00:22:45.087 { 00:22:45.087 "name": "NVMe0", 00:22:45.087 "trtype": "tcp", 00:22:45.087 "traddr": "10.0.0.2", 00:22:45.087 "hostaddr": "10.0.0.2", 00:22:45.087 "hostsvcid": "60000", 00:22:45.087 "adrfam": "ipv4", 00:22:45.087 "trsvcid": "4420", 00:22:45.087 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.087 "multipath": "failover", 00:22:45.087 "method": "bdev_nvme_attach_controller", 00:22:45.087 "req_id": 1 00:22:45.087 } 00:22:45.087 Got JSON-RPC error response 00:22:45.087 response: 00:22:45.087 { 00:22:45.087 "code": -114, 00:22:45.087 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:45.087 } 00:22:45.087 16:05:24 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:45.087 16:05:24 -- common/autotest_common.sh@641 -- # es=1 00:22:45.087 16:05:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:45.087 16:05:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:45.087 16:05:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:45.087 16:05:24 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:45.087 16:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.087 16:05:24 -- common/autotest_common.sh@10 -- # set +x 00:22:45.347 00:22:45.347 16:05:24 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:22:45.347 16:05:24 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:45.347 16:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.347 16:05:24 -- common/autotest_common.sh@10 -- # set +x 00:22:45.347 16:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.347 16:05:24 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:45.347 16:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.347 16:05:24 -- common/autotest_common.sh@10 -- # set +x 00:22:45.347 00:22:45.347 16:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.347 16:05:24 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:45.347 16:05:24 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:45.347 16:05:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:45.347 16:05:24 -- common/autotest_common.sh@10 -- # set +x 00:22:45.347 16:05:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:45.347 16:05:24 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:45.347 16:05:24 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:46.725 0 00:22:46.725 16:05:26 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:46.725 16:05:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.725 16:05:26 -- common/autotest_common.sh@10 -- # set +x 00:22:46.725 16:05:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.725 16:05:26 -- host/multicontroller.sh@100 -- # killprocess 2518657 00:22:46.725 16:05:26 -- common/autotest_common.sh@936 -- # '[' -z 2518657 ']' 00:22:46.725 16:05:26 -- common/autotest_common.sh@940 -- # kill -0 2518657 00:22:46.725 16:05:26 -- common/autotest_common.sh@941 -- # uname 00:22:46.725 16:05:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:46.725 16:05:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2518657 00:22:46.725 16:05:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:46.725 16:05:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:46.725 16:05:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2518657' 00:22:46.725 killing process with pid 2518657 00:22:46.725 16:05:26 -- common/autotest_common.sh@955 -- # kill 2518657 00:22:46.725 16:05:26 -- common/autotest_common.sh@960 -- # wait 2518657 00:22:47.663 16:05:27 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:47.663 16:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.663 16:05:27 -- common/autotest_common.sh@10 -- # set +x 00:22:47.663 16:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.663 16:05:27 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:47.663 16:05:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.663 16:05:27 -- common/autotest_common.sh@10 -- # set +x 00:22:47.663 16:05:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.663 16:05:27 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
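The failover exercise traced above can be replayed by hand against the bdevperf RPC socket: the duplicate attach of NVMe0 is rejected with -114 unless the name, network path and multipath policy all agree, a second path on port 4421 is then added and removed, and an independent NVMe1 controller is attached with its own pinned host port before I/O is kicked off. A minimal sketch, assuming the same socket, addresses and NQN as in the trace and rpc.py run from the SPDK tree:

RPC='./scripts/rpc.py -s /var/tmp/bdevperf.sock'

# re-running the original NVMe0 attach with -x disable or -x failover fails with -114, as captured above

# add, then drop, a second path on port 4421 for the existing NVMe0 controller
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# independent second controller against the same subsystem, host address/port pinned
$RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

# the test expects exactly two controllers before starting I/O
$RPC bdev_nvme_get_controllers | grep -c NVMe
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests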
00:22:47.663 16:05:27 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:47.663 16:05:27 -- common/autotest_common.sh@1598 -- # read -r file 00:22:47.663 16:05:27 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:47.663 16:05:27 -- common/autotest_common.sh@1597 -- # sort -u 00:22:47.663 16:05:27 -- common/autotest_common.sh@1599 -- # cat 00:22:47.663 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:47.663 [2024-04-26 16:05:23.639388] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:47.663 [2024-04-26 16:05:23.639482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2518657 ] 00:22:47.663 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.663 [2024-04-26 16:05:23.739404] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.663 [2024-04-26 16:05:23.983297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.663 [2024-04-26 16:05:24.960077] bdev.c:4551:bdev_name_add: *ERROR*: Bdev name 80c6e034-b843-4f2d-bb08-b7e993abcc49 already exists 00:22:47.663 [2024-04-26 16:05:24.960119] bdev.c:7668:bdev_register: *ERROR*: Unable to add uuid:80c6e034-b843-4f2d-bb08-b7e993abcc49 alias for bdev NVMe1n1 00:22:47.663 [2024-04-26 16:05:24.960135] bdev_nvme.c:4276:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:47.663 Running I/O for 1 seconds... 00:22:47.663 00:22:47.663 Latency(us) 00:22:47.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.663 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:47.663 NVMe0n1 : 1.01 19083.52 74.55 0.00 0.00 6682.46 5955.23 24390.79 00:22:47.663 =================================================================================================================== 00:22:47.663 Total : 19083.52 74.55 0.00 0.00 6682.46 5955.23 24390.79 00:22:47.663 Received shutdown signal, test time was about 1.000000 seconds 00:22:47.663 00:22:47.663 Latency(us) 00:22:47.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.663 =================================================================================================================== 00:22:47.663 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:47.663 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:47.663 16:05:27 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:47.663 16:05:27 -- common/autotest_common.sh@1598 -- # read -r file 00:22:47.663 16:05:27 -- host/multicontroller.sh@108 -- # nvmftestfini 00:22:47.663 16:05:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:47.663 16:05:27 -- nvmf/common.sh@117 -- # sync 00:22:47.663 16:05:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:47.663 16:05:27 -- nvmf/common.sh@120 -- # set +e 00:22:47.663 16:05:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:47.663 16:05:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:47.663 rmmod nvme_tcp 00:22:47.663 rmmod nvme_fabrics 00:22:47.663 rmmod nvme_keyring 00:22:47.663 16:05:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:47.663 16:05:27 -- nvmf/common.sh@124 -- # set -e 
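The block between the two "--- try.txt ---" markers above is the bdevperf output captured for this run, printed and then deleted by the harness's print-and-purge step. A rough re-creation of that pattern (not the exact autotest_common.sh helper), assuming the same marker format and file layout:

# print each captured per-test log between markers, then remove it
pap() {
    local file
    while read -r file; do
        echo "--- $file ---"
        cat "$file"
        echo "--- $file ---"
        rm -f "$file"
    done < <(find "$@" -type f | sort -u)
}

pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt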
00:22:47.663 16:05:27 -- nvmf/common.sh@125 -- # return 0 00:22:47.663 16:05:27 -- nvmf/common.sh@478 -- # '[' -n 2518451 ']' 00:22:47.663 16:05:27 -- nvmf/common.sh@479 -- # killprocess 2518451 00:22:47.663 16:05:27 -- common/autotest_common.sh@936 -- # '[' -z 2518451 ']' 00:22:47.663 16:05:27 -- common/autotest_common.sh@940 -- # kill -0 2518451 00:22:47.663 16:05:27 -- common/autotest_common.sh@941 -- # uname 00:22:47.663 16:05:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:47.663 16:05:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2518451 00:22:47.663 16:05:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:47.663 16:05:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:47.663 16:05:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2518451' 00:22:47.663 killing process with pid 2518451 00:22:47.663 16:05:27 -- common/autotest_common.sh@955 -- # kill 2518451 00:22:47.663 16:05:27 -- common/autotest_common.sh@960 -- # wait 2518451 00:22:49.568 16:05:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:49.568 16:05:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:49.568 16:05:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:49.568 16:05:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:49.568 16:05:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:49.568 16:05:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.568 16:05:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:49.568 16:05:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.478 16:05:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:51.478 00:22:51.478 real 0m14.057s 00:22:51.478 user 0m22.938s 00:22:51.478 sys 0m4.898s 00:22:51.478 16:05:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:51.478 16:05:31 -- common/autotest_common.sh@10 -- # set +x 00:22:51.478 ************************************ 00:22:51.478 END TEST nvmf_multicontroller 00:22:51.478 ************************************ 00:22:51.478 16:05:31 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:51.478 16:05:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:51.478 16:05:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:51.478 16:05:31 -- common/autotest_common.sh@10 -- # set +x 00:22:51.736 ************************************ 00:22:51.736 START TEST nvmf_aer 00:22:51.736 ************************************ 00:22:51.736 16:05:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:51.736 * Looking for test storage... 
00:22:51.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:51.736 16:05:31 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:51.736 16:05:31 -- nvmf/common.sh@7 -- # uname -s 00:22:51.736 16:05:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.736 16:05:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.736 16:05:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.736 16:05:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.736 16:05:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.736 16:05:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.736 16:05:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.736 16:05:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.736 16:05:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.736 16:05:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.736 16:05:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:51.736 16:05:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:51.736 16:05:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.736 16:05:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.736 16:05:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:51.736 16:05:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.736 16:05:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:51.736 16:05:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.736 16:05:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.736 16:05:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.736 16:05:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.736 16:05:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.737 16:05:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.737 16:05:31 -- paths/export.sh@5 -- # export PATH 00:22:51.737 16:05:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.737 16:05:31 -- nvmf/common.sh@47 -- # : 0 00:22:51.737 16:05:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:51.737 16:05:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:51.737 16:05:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.737 16:05:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.737 16:05:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.737 16:05:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:51.737 16:05:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:51.737 16:05:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:51.737 16:05:31 -- host/aer.sh@11 -- # nvmftestinit 00:22:51.737 16:05:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:51.737 16:05:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.737 16:05:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:51.737 16:05:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:51.737 16:05:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:51.737 16:05:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.737 16:05:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.737 16:05:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.737 16:05:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:51.737 16:05:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:51.737 16:05:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:51.737 16:05:31 -- common/autotest_common.sh@10 -- # set +x 00:22:57.012 16:05:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:57.012 16:05:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:57.012 16:05:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:57.012 16:05:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:57.012 16:05:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:57.012 16:05:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:57.012 16:05:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:57.012 16:05:36 -- nvmf/common.sh@295 -- # net_devs=() 00:22:57.012 16:05:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:57.012 16:05:36 -- nvmf/common.sh@296 -- # e810=() 00:22:57.012 16:05:36 -- nvmf/common.sh@296 -- # local -ga e810 00:22:57.012 16:05:36 -- nvmf/common.sh@297 -- # x722=() 00:22:57.012 
16:05:36 -- nvmf/common.sh@297 -- # local -ga x722 00:22:57.012 16:05:36 -- nvmf/common.sh@298 -- # mlx=() 00:22:57.012 16:05:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:57.012 16:05:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.012 16:05:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.012 16:05:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.012 16:05:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.012 16:05:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.012 16:05:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.012 16:05:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.012 16:05:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.012 16:05:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.012 16:05:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.012 16:05:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.012 16:05:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:57.012 16:05:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:57.012 16:05:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:57.012 16:05:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:57.012 16:05:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:57.012 16:05:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:57.013 16:05:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:57.013 16:05:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:57.013 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:57.013 16:05:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:57.013 16:05:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:57.013 16:05:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.013 16:05:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.013 16:05:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:57.013 16:05:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:57.013 16:05:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:57.013 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:57.013 16:05:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:57.013 16:05:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:57.013 16:05:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.013 16:05:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.013 16:05:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:57.013 16:05:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:57.013 16:05:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:57.013 16:05:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:57.013 16:05:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:57.013 16:05:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.013 16:05:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:57.013 16:05:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.013 16:05:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:57.013 Found net devices under 0000:86:00.0: cvl_0_0 00:22:57.013 16:05:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.013 16:05:36 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:57.013 16:05:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.013 16:05:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:57.013 16:05:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.013 16:05:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:57.013 Found net devices under 0000:86:00.1: cvl_0_1 00:22:57.013 16:05:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.013 16:05:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:57.013 16:05:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:57.013 16:05:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:57.013 16:05:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:57.013 16:05:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:57.013 16:05:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.013 16:05:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.013 16:05:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.013 16:05:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:57.013 16:05:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.013 16:05:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.013 16:05:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:57.013 16:05:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.013 16:05:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.013 16:05:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:57.013 16:05:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:57.013 16:05:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.013 16:05:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.013 16:05:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.013 16:05:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.013 16:05:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:57.013 16:05:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.013 16:05:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.013 16:05:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.013 16:05:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:57.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:22:57.013 00:22:57.013 --- 10.0.0.2 ping statistics --- 00:22:57.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.013 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:22:57.013 16:05:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:57.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:22:57.013 00:22:57.013 --- 10.0.0.1 ping statistics --- 00:22:57.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.013 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:22:57.013 16:05:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.013 16:05:36 -- nvmf/common.sh@411 -- # return 0 00:22:57.013 16:05:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:57.013 16:05:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.013 16:05:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:57.013 16:05:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:57.013 16:05:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.013 16:05:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:57.013 16:05:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:57.013 16:05:36 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:57.013 16:05:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:57.013 16:05:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:57.013 16:05:36 -- common/autotest_common.sh@10 -- # set +x 00:22:57.013 16:05:36 -- nvmf/common.sh@470 -- # nvmfpid=2522861 00:22:57.013 16:05:36 -- nvmf/common.sh@471 -- # waitforlisten 2522861 00:22:57.013 16:05:36 -- common/autotest_common.sh@817 -- # '[' -z 2522861 ']' 00:22:57.013 16:05:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.013 16:05:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:57.013 16:05:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.013 16:05:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:57.013 16:05:36 -- common/autotest_common.sh@10 -- # set +x 00:22:57.013 16:05:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:57.013 [2024-04-26 16:05:36.413232] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:57.013 [2024-04-26 16:05:36.413327] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.013 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.013 [2024-04-26 16:05:36.521847] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:57.272 [2024-04-26 16:05:36.737944] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.272 [2024-04-26 16:05:36.737988] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.272 [2024-04-26 16:05:36.737998] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.272 [2024-04-26 16:05:36.738024] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.272 [2024-04-26 16:05:36.738033] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
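The connectivity check above runs against a target namespace wired up a few lines earlier: one E810 port (cvl_0_0, 10.0.0.2) is moved into cvl_0_0_ns_spdk for the target, its peer (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator, and nvmf_tgt is launched inside the namespace. A condensed sketch of those steps as the trace records them, with the workspace path shortened and the readiness wait reduced to a comment:

# target/initiator split across a network namespace (commands from nvmf_tcp_init)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

# start the target inside the namespace; the harness then polls its RPC socket (waitforlisten)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &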
00:22:57.272 [2024-04-26 16:05:36.738156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.272 [2024-04-26 16:05:36.738187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.272 [2024-04-26 16:05:36.738251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.272 [2024-04-26 16:05:36.738259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:57.531 16:05:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:57.531 16:05:37 -- common/autotest_common.sh@850 -- # return 0 00:22:57.531 16:05:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:57.531 16:05:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:57.531 16:05:37 -- common/autotest_common.sh@10 -- # set +x 00:22:57.531 16:05:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.531 16:05:37 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:57.531 16:05:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.531 16:05:37 -- common/autotest_common.sh@10 -- # set +x 00:22:57.790 [2024-04-26 16:05:37.216184] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.790 16:05:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.790 16:05:37 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:57.790 16:05:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.790 16:05:37 -- common/autotest_common.sh@10 -- # set +x 00:22:57.790 Malloc0 00:22:57.790 16:05:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.790 16:05:37 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:57.790 16:05:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.790 16:05:37 -- common/autotest_common.sh@10 -- # set +x 00:22:57.790 16:05:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.790 16:05:37 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:57.790 16:05:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.790 16:05:37 -- common/autotest_common.sh@10 -- # set +x 00:22:57.790 16:05:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.790 16:05:37 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:57.790 16:05:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.790 16:05:37 -- common/autotest_common.sh@10 -- # set +x 00:22:57.790 [2024-04-26 16:05:37.333704] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.790 16:05:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.790 16:05:37 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:57.790 16:05:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.790 16:05:37 -- common/autotest_common.sh@10 -- # set +x 00:22:57.790 [2024-04-26 16:05:37.341441] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:22:57.790 [ 00:22:57.790 { 00:22:57.790 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:57.790 "subtype": "Discovery", 00:22:57.790 "listen_addresses": [], 00:22:57.790 "allow_any_host": true, 00:22:57.790 "hosts": [] 00:22:57.790 }, 00:22:57.790 { 00:22:57.790 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:22:57.790 "subtype": "NVMe", 00:22:57.790 "listen_addresses": [ 00:22:57.790 { 00:22:57.790 "transport": "TCP", 00:22:57.790 "trtype": "TCP", 00:22:57.790 "adrfam": "IPv4", 00:22:57.790 "traddr": "10.0.0.2", 00:22:57.790 "trsvcid": "4420" 00:22:57.790 } 00:22:57.790 ], 00:22:57.790 "allow_any_host": true, 00:22:57.790 "hosts": [], 00:22:57.790 "serial_number": "SPDK00000000000001", 00:22:57.790 "model_number": "SPDK bdev Controller", 00:22:57.790 "max_namespaces": 2, 00:22:57.790 "min_cntlid": 1, 00:22:57.790 "max_cntlid": 65519, 00:22:57.790 "namespaces": [ 00:22:57.790 { 00:22:57.790 "nsid": 1, 00:22:57.790 "bdev_name": "Malloc0", 00:22:57.790 "name": "Malloc0", 00:22:57.790 "nguid": "2E40F4D15C2A42F2BDE0A1735A316867", 00:22:57.790 "uuid": "2e40f4d1-5c2a-42f2-bde0-a1735a316867" 00:22:57.790 } 00:22:57.790 ] 00:22:57.790 } 00:22:57.790 ] 00:22:57.790 16:05:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.790 16:05:37 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:57.790 16:05:37 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:57.790 16:05:37 -- host/aer.sh@33 -- # aerpid=2522976 00:22:57.790 16:05:37 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:57.790 16:05:37 -- common/autotest_common.sh@1251 -- # local i=0 00:22:57.790 16:05:37 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:57.790 16:05:37 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:22:57.790 16:05:37 -- common/autotest_common.sh@1254 -- # i=1 00:22:57.790 16:05:37 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:22:57.790 16:05:37 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:57.790 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.790 16:05:37 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:57.790 16:05:37 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:22:57.790 16:05:37 -- common/autotest_common.sh@1254 -- # i=2 00:22:57.790 16:05:37 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:22:58.049 16:05:37 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:58.049 16:05:37 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:58.049 16:05:37 -- common/autotest_common.sh@1262 -- # return 0 00:22:58.049 16:05:37 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:58.049 16:05:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.049 16:05:37 -- common/autotest_common.sh@10 -- # set +x 00:22:58.309 Malloc1 00:22:58.309 16:05:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.309 16:05:37 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:58.309 16:05:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.309 16:05:37 -- common/autotest_common.sh@10 -- # set +x 00:22:58.309 16:05:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.309 16:05:37 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:58.309 16:05:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.309 16:05:37 -- common/autotest_common.sh@10 -- # set +x 00:22:58.309 [ 00:22:58.309 { 00:22:58.309 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:58.309 "subtype": "Discovery", 00:22:58.309 "listen_addresses": [], 00:22:58.309 "allow_any_host": true, 00:22:58.309 "hosts": [] 00:22:58.309 }, 00:22:58.309 { 00:22:58.309 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.309 "subtype": "NVMe", 00:22:58.309 "listen_addresses": [ 00:22:58.309 { 00:22:58.309 "transport": "TCP", 00:22:58.309 "trtype": "TCP", 00:22:58.309 "adrfam": "IPv4", 00:22:58.309 "traddr": "10.0.0.2", 00:22:58.309 "trsvcid": "4420" 00:22:58.309 } 00:22:58.309 ], 00:22:58.309 "allow_any_host": true, 00:22:58.309 "hosts": [], 00:22:58.309 "serial_number": "SPDK00000000000001", 00:22:58.309 "model_number": "SPDK bdev Controller", 00:22:58.309 "max_namespaces": 2, 00:22:58.309 "min_cntlid": 1, 00:22:58.309 "max_cntlid": 65519, 00:22:58.309 "namespaces": [ 00:22:58.309 { 00:22:58.309 "nsid": 1, 00:22:58.309 "bdev_name": "Malloc0", 00:22:58.309 "name": "Malloc0", 00:22:58.309 "nguid": "2E40F4D15C2A42F2BDE0A1735A316867", 00:22:58.309 "uuid": "2e40f4d1-5c2a-42f2-bde0-a1735a316867" 00:22:58.309 }, 00:22:58.309 { 00:22:58.309 "nsid": 2, 00:22:58.309 "bdev_name": "Malloc1", 00:22:58.309 "name": "Malloc1", 00:22:58.309 "nguid": "1BC3506D6EAB42A0ACE5290A08EF3F59", 00:22:58.309 "uuid": "1bc3506d-6eab-42a0-ace5-290a08ef3f59" 00:22:58.309 } 00:22:58.309 ] 00:22:58.309 } 00:22:58.309 ] 00:22:58.309 16:05:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.309 16:05:37 -- host/aer.sh@43 -- # wait 2522976 00:22:58.309 Asynchronous Event Request test 00:22:58.309 Attaching to 10.0.0.2 00:22:58.309 Attached to 10.0.0.2 00:22:58.309 Registering asynchronous event callbacks... 00:22:58.309 Starting namespace attribute notice tests for all controllers... 00:22:58.309 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:58.309 aer_cb - Changed Namespace 00:22:58.309 Cleaning up... 
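The "Changed Namespace" notice above is the point of the test: the aer example holds an admin connection with outstanding Asynchronous Event Requests, and adding a second namespace to the subsystem makes the target complete one with a namespace-attribute-changed event. A condensed sketch of the sequence host/aer.sh drives, with paths relative to the SPDK tree and rpc.py talking to the target's default socket:

RPC=./scripts/rpc.py

# target side: transport, one malloc namespace, subsystem capped at two namespaces, TCP listener
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 --name Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# host side: run the aer example against the subsystem; it touches /tmp/aer_touch_file
# once its AER callbacks are registered, which is what the harness waits for before continuing
./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &

# adding nsid 2 is what fires the Changed Namespace AER logged above
$RPC bdev_malloc_create 64 4096 --name Malloc1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2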
00:22:58.309 16:05:37 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:58.309 16:05:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.309 16:05:37 -- common/autotest_common.sh@10 -- # set +x 00:22:58.569 16:05:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.569 16:05:38 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:58.569 16:05:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.569 16:05:38 -- common/autotest_common.sh@10 -- # set +x 00:22:58.569 16:05:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.569 16:05:38 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:58.569 16:05:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.569 16:05:38 -- common/autotest_common.sh@10 -- # set +x 00:22:58.569 16:05:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.569 16:05:38 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:58.569 16:05:38 -- host/aer.sh@51 -- # nvmftestfini 00:22:58.569 16:05:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:58.569 16:05:38 -- nvmf/common.sh@117 -- # sync 00:22:58.569 16:05:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:58.569 16:05:38 -- nvmf/common.sh@120 -- # set +e 00:22:58.569 16:05:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:58.569 16:05:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:58.569 rmmod nvme_tcp 00:22:58.569 rmmod nvme_fabrics 00:22:58.828 rmmod nvme_keyring 00:22:58.829 16:05:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:58.829 16:05:38 -- nvmf/common.sh@124 -- # set -e 00:22:58.829 16:05:38 -- nvmf/common.sh@125 -- # return 0 00:22:58.829 16:05:38 -- nvmf/common.sh@478 -- # '[' -n 2522861 ']' 00:22:58.829 16:05:38 -- nvmf/common.sh@479 -- # killprocess 2522861 00:22:58.829 16:05:38 -- common/autotest_common.sh@936 -- # '[' -z 2522861 ']' 00:22:58.829 16:05:38 -- common/autotest_common.sh@940 -- # kill -0 2522861 00:22:58.829 16:05:38 -- common/autotest_common.sh@941 -- # uname 00:22:58.829 16:05:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:58.829 16:05:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2522861 00:22:58.829 16:05:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:58.829 16:05:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:58.829 16:05:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2522861' 00:22:58.829 killing process with pid 2522861 00:22:58.829 16:05:38 -- common/autotest_common.sh@955 -- # kill 2522861 00:22:58.829 [2024-04-26 16:05:38.321399] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:22:58.829 16:05:38 -- common/autotest_common.sh@960 -- # wait 2522861 00:23:00.209 16:05:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:00.209 16:05:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:00.209 16:05:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:00.209 16:05:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:00.209 16:05:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:00.209 16:05:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.209 16:05:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:00.209 16:05:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.119 16:05:41 -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:02.119 00:23:02.119 real 0m10.444s 00:23:02.119 user 0m11.254s 00:23:02.119 sys 0m4.437s 00:23:02.119 16:05:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:02.119 16:05:41 -- common/autotest_common.sh@10 -- # set +x 00:23:02.119 ************************************ 00:23:02.119 END TEST nvmf_aer 00:23:02.119 ************************************ 00:23:02.119 16:05:41 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:02.119 16:05:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:02.119 16:05:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:02.119 16:05:41 -- common/autotest_common.sh@10 -- # set +x 00:23:02.378 ************************************ 00:23:02.378 START TEST nvmf_async_init 00:23:02.378 ************************************ 00:23:02.378 16:05:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:02.378 * Looking for test storage... 00:23:02.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:02.378 16:05:41 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.378 16:05:41 -- nvmf/common.sh@7 -- # uname -s 00:23:02.378 16:05:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.378 16:05:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.378 16:05:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.378 16:05:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.378 16:05:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.378 16:05:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.378 16:05:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.378 16:05:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.378 16:05:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.378 16:05:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.378 16:05:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:02.378 16:05:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:02.378 16:05:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.378 16:05:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.378 16:05:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:02.378 16:05:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.378 16:05:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:02.378 16:05:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.378 16:05:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.378 16:05:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.378 16:05:41 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.379 16:05:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.379 16:05:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.379 16:05:41 -- paths/export.sh@5 -- # export PATH 00:23:02.379 16:05:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.379 16:05:41 -- nvmf/common.sh@47 -- # : 0 00:23:02.379 16:05:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:02.379 16:05:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:02.379 16:05:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.379 16:05:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.379 16:05:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.379 16:05:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:02.379 16:05:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:02.379 16:05:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:02.379 16:05:41 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:02.379 16:05:41 -- host/async_init.sh@14 -- # null_block_size=512 00:23:02.379 16:05:41 -- host/async_init.sh@15 -- # null_bdev=null0 00:23:02.379 16:05:41 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:02.379 16:05:41 -- host/async_init.sh@20 -- # uuidgen 00:23:02.379 16:05:41 -- host/async_init.sh@20 -- # tr -d - 00:23:02.379 16:05:41 -- host/async_init.sh@20 -- # nguid=6840df47241f437493f81728d86032c3 00:23:02.379 16:05:41 -- host/async_init.sh@22 -- # nvmftestinit 00:23:02.379 16:05:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 
00:23:02.379 16:05:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.379 16:05:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:02.379 16:05:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:02.379 16:05:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:02.379 16:05:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.379 16:05:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:02.379 16:05:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.379 16:05:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:02.379 16:05:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:02.379 16:05:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:02.379 16:05:41 -- common/autotest_common.sh@10 -- # set +x 00:23:07.664 16:05:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:07.664 16:05:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:07.664 16:05:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:07.664 16:05:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:07.664 16:05:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:07.664 16:05:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:07.664 16:05:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:07.664 16:05:47 -- nvmf/common.sh@295 -- # net_devs=() 00:23:07.664 16:05:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:07.664 16:05:47 -- nvmf/common.sh@296 -- # e810=() 00:23:07.664 16:05:47 -- nvmf/common.sh@296 -- # local -ga e810 00:23:07.664 16:05:47 -- nvmf/common.sh@297 -- # x722=() 00:23:07.664 16:05:47 -- nvmf/common.sh@297 -- # local -ga x722 00:23:07.664 16:05:47 -- nvmf/common.sh@298 -- # mlx=() 00:23:07.664 16:05:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:07.664 16:05:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.664 16:05:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.664 16:05:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.664 16:05:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.664 16:05:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.664 16:05:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.664 16:05:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.664 16:05:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.664 16:05:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.664 16:05:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.664 16:05:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.664 16:05:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:07.664 16:05:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:07.664 16:05:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:07.664 16:05:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:07.664 16:05:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:07.664 16:05:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:07.664 16:05:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:07.664 16:05:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:07.664 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:07.664 16:05:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:07.664 16:05:47 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:07.664 16:05:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.664 16:05:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.664 16:05:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:07.664 16:05:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:07.664 16:05:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:07.664 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:07.664 16:05:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:07.664 16:05:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:07.664 16:05:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.664 16:05:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.664 16:05:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:07.664 16:05:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:07.664 16:05:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:07.664 16:05:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:07.664 16:05:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.664 16:05:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.664 16:05:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:07.664 16:05:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.664 16:05:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:07.664 Found net devices under 0000:86:00.0: cvl_0_0 00:23:07.664 16:05:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.664 16:05:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.664 16:05:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.664 16:05:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:07.664 16:05:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.664 16:05:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:07.664 Found net devices under 0000:86:00.1: cvl_0_1 00:23:07.664 16:05:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.664 16:05:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:07.664 16:05:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:07.664 16:05:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:07.664 16:05:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:07.664 16:05:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:07.664 16:05:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.664 16:05:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.664 16:05:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.664 16:05:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:07.664 16:05:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.664 16:05:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.664 16:05:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:07.664 16:05:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.664 16:05:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.664 16:05:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:07.664 16:05:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:07.664 16:05:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:07.664 16:05:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:23:07.664 16:05:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:07.664 16:05:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.664 16:05:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:07.664 16:05:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.664 16:05:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.664 16:05:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.664 16:05:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:07.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:23:07.664 00:23:07.664 --- 10.0.0.2 ping statistics --- 00:23:07.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.664 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:23:07.664 16:05:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:07.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:23:07.925 00:23:07.925 --- 10.0.0.1 ping statistics --- 00:23:07.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.925 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:23:07.925 16:05:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.925 16:05:47 -- nvmf/common.sh@411 -- # return 0 00:23:07.925 16:05:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:07.925 16:05:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.925 16:05:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:07.925 16:05:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:07.925 16:05:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.925 16:05:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:07.925 16:05:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:07.925 16:05:47 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:07.925 16:05:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:07.925 16:05:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:07.925 16:05:47 -- common/autotest_common.sh@10 -- # set +x 00:23:07.925 16:05:47 -- nvmf/common.sh@470 -- # nvmfpid=2526729 00:23:07.925 16:05:47 -- nvmf/common.sh@471 -- # waitforlisten 2526729 00:23:07.925 16:05:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:07.925 16:05:47 -- common/autotest_common.sh@817 -- # '[' -z 2526729 ']' 00:23:07.925 16:05:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.925 16:05:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:07.925 16:05:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.925 16:05:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:07.925 16:05:47 -- common/autotest_common.sh@10 -- # set +x 00:23:07.925 [2024-04-26 16:05:47.460087] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:23:07.925 [2024-04-26 16:05:47.460174] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.925 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.925 [2024-04-26 16:05:47.569483] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.185 [2024-04-26 16:05:47.797830] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.185 [2024-04-26 16:05:47.797880] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.185 [2024-04-26 16:05:47.797891] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.185 [2024-04-26 16:05:47.797901] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.185 [2024-04-26 16:05:47.797912] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:08.185 [2024-04-26 16:05:47.797947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.753 16:05:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:08.753 16:05:48 -- common/autotest_common.sh@850 -- # return 0 00:23:08.753 16:05:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:08.753 16:05:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:08.753 16:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:08.753 16:05:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.753 16:05:48 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:08.753 16:05:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.753 16:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:08.753 [2024-04-26 16:05:48.267464] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.753 16:05:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.753 16:05:48 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:08.753 16:05:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.753 16:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:08.753 null0 00:23:08.753 16:05:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.753 16:05:48 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:08.753 16:05:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.753 16:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:08.753 16:05:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.753 16:05:48 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:08.753 16:05:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.753 16:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:08.753 16:05:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.753 16:05:48 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6840df47241f437493f81728d86032c3 00:23:08.753 16:05:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.753 16:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:08.753 16:05:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.753 16:05:48 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:23:08.753 16:05:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.753 16:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:08.753 [2024-04-26 16:05:48.307709] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.753 16:05:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.753 16:05:48 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:08.753 16:05:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.753 16:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:09.013 nvme0n1 00:23:09.013 16:05:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.014 16:05:48 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:09.014 16:05:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.014 16:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:09.014 [ 00:23:09.014 { 00:23:09.014 "name": "nvme0n1", 00:23:09.014 "aliases": [ 00:23:09.014 "6840df47-241f-4374-93f8-1728d86032c3" 00:23:09.014 ], 00:23:09.014 "product_name": "NVMe disk", 00:23:09.014 "block_size": 512, 00:23:09.014 "num_blocks": 2097152, 00:23:09.014 "uuid": "6840df47-241f-4374-93f8-1728d86032c3", 00:23:09.014 "assigned_rate_limits": { 00:23:09.014 "rw_ios_per_sec": 0, 00:23:09.014 "rw_mbytes_per_sec": 0, 00:23:09.014 "r_mbytes_per_sec": 0, 00:23:09.014 "w_mbytes_per_sec": 0 00:23:09.014 }, 00:23:09.014 "claimed": false, 00:23:09.014 "zoned": false, 00:23:09.014 "supported_io_types": { 00:23:09.014 "read": true, 00:23:09.014 "write": true, 00:23:09.014 "unmap": false, 00:23:09.014 "write_zeroes": true, 00:23:09.014 "flush": true, 00:23:09.014 "reset": true, 00:23:09.014 "compare": true, 00:23:09.014 "compare_and_write": true, 00:23:09.014 "abort": true, 00:23:09.014 "nvme_admin": true, 00:23:09.014 "nvme_io": true 00:23:09.014 }, 00:23:09.014 "memory_domains": [ 00:23:09.014 { 00:23:09.014 "dma_device_id": "system", 00:23:09.014 "dma_device_type": 1 00:23:09.014 } 00:23:09.014 ], 00:23:09.014 "driver_specific": { 00:23:09.014 "nvme": [ 00:23:09.014 { 00:23:09.014 "trid": { 00:23:09.014 "trtype": "TCP", 00:23:09.014 "adrfam": "IPv4", 00:23:09.014 "traddr": "10.0.0.2", 00:23:09.014 "trsvcid": "4420", 00:23:09.014 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:09.014 }, 00:23:09.014 "ctrlr_data": { 00:23:09.014 "cntlid": 1, 00:23:09.014 "vendor_id": "0x8086", 00:23:09.014 "model_number": "SPDK bdev Controller", 00:23:09.014 "serial_number": "00000000000000000000", 00:23:09.014 "firmware_revision": "24.05", 00:23:09.014 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:09.014 "oacs": { 00:23:09.014 "security": 0, 00:23:09.014 "format": 0, 00:23:09.014 "firmware": 0, 00:23:09.014 "ns_manage": 0 00:23:09.014 }, 00:23:09.014 "multi_ctrlr": true, 00:23:09.014 "ana_reporting": false 00:23:09.014 }, 00:23:09.014 "vs": { 00:23:09.014 "nvme_version": "1.3" 00:23:09.014 }, 00:23:09.014 "ns_data": { 00:23:09.014 "id": 1, 00:23:09.014 "can_share": true 00:23:09.014 } 00:23:09.014 } 00:23:09.014 ], 00:23:09.014 "mp_policy": "active_passive" 00:23:09.014 } 00:23:09.014 } 00:23:09.014 ] 00:23:09.014 16:05:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.014 16:05:48 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:09.014 16:05:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.014 16:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:09.014 [2024-04-26 16:05:48.556937] 
nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:09.014 [2024-04-26 16:05:48.557037] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006840 (9): Bad file descriptor 00:23:09.014 [2024-04-26 16:05:48.689187] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:09.014 16:05:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.014 16:05:48 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:09.014 16:05:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.014 16:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:09.274 [ 00:23:09.274 { 00:23:09.274 "name": "nvme0n1", 00:23:09.274 "aliases": [ 00:23:09.274 "6840df47-241f-4374-93f8-1728d86032c3" 00:23:09.274 ], 00:23:09.274 "product_name": "NVMe disk", 00:23:09.274 "block_size": 512, 00:23:09.274 "num_blocks": 2097152, 00:23:09.274 "uuid": "6840df47-241f-4374-93f8-1728d86032c3", 00:23:09.274 "assigned_rate_limits": { 00:23:09.274 "rw_ios_per_sec": 0, 00:23:09.274 "rw_mbytes_per_sec": 0, 00:23:09.274 "r_mbytes_per_sec": 0, 00:23:09.274 "w_mbytes_per_sec": 0 00:23:09.274 }, 00:23:09.274 "claimed": false, 00:23:09.274 "zoned": false, 00:23:09.274 "supported_io_types": { 00:23:09.274 "read": true, 00:23:09.274 "write": true, 00:23:09.274 "unmap": false, 00:23:09.274 "write_zeroes": true, 00:23:09.274 "flush": true, 00:23:09.274 "reset": true, 00:23:09.274 "compare": true, 00:23:09.274 "compare_and_write": true, 00:23:09.274 "abort": true, 00:23:09.274 "nvme_admin": true, 00:23:09.274 "nvme_io": true 00:23:09.274 }, 00:23:09.274 "memory_domains": [ 00:23:09.274 { 00:23:09.274 "dma_device_id": "system", 00:23:09.274 "dma_device_type": 1 00:23:09.274 } 00:23:09.274 ], 00:23:09.274 "driver_specific": { 00:23:09.274 "nvme": [ 00:23:09.274 { 00:23:09.274 "trid": { 00:23:09.274 "trtype": "TCP", 00:23:09.274 "adrfam": "IPv4", 00:23:09.274 "traddr": "10.0.0.2", 00:23:09.274 "trsvcid": "4420", 00:23:09.274 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:09.274 }, 00:23:09.274 "ctrlr_data": { 00:23:09.274 "cntlid": 2, 00:23:09.274 "vendor_id": "0x8086", 00:23:09.274 "model_number": "SPDK bdev Controller", 00:23:09.274 "serial_number": "00000000000000000000", 00:23:09.274 "firmware_revision": "24.05", 00:23:09.274 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:09.274 "oacs": { 00:23:09.274 "security": 0, 00:23:09.274 "format": 0, 00:23:09.274 "firmware": 0, 00:23:09.274 "ns_manage": 0 00:23:09.274 }, 00:23:09.274 "multi_ctrlr": true, 00:23:09.274 "ana_reporting": false 00:23:09.274 }, 00:23:09.274 "vs": { 00:23:09.274 "nvme_version": "1.3" 00:23:09.274 }, 00:23:09.274 "ns_data": { 00:23:09.274 "id": 1, 00:23:09.274 "can_share": true 00:23:09.274 } 00:23:09.274 } 00:23:09.274 ], 00:23:09.274 "mp_policy": "active_passive" 00:23:09.274 } 00:23:09.274 } 00:23:09.274 ] 00:23:09.274 16:05:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.274 16:05:48 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.274 16:05:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.274 16:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:09.274 16:05:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.274 16:05:48 -- host/async_init.sh@53 -- # mktemp 00:23:09.274 16:05:48 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.B7G4KuApAE 00:23:09.274 16:05:48 -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:09.274 16:05:48 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.B7G4KuApAE 00:23:09.274 16:05:48 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:09.274 16:05:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.274 16:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:09.274 16:05:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.274 16:05:48 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:09.274 16:05:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.274 16:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:09.274 [2024-04-26 16:05:48.737561] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:09.274 [2024-04-26 16:05:48.737729] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:09.274 16:05:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.274 16:05:48 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.B7G4KuApAE 00:23:09.274 16:05:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.274 16:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:09.274 [2024-04-26 16:05:48.745580] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:09.274 16:05:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.274 16:05:48 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.B7G4KuApAE 00:23:09.274 16:05:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.274 16:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:09.274 [2024-04-26 16:05:48.753584] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:09.274 [2024-04-26 16:05:48.753657] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:09.274 nvme0n1 00:23:09.274 16:05:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.274 16:05:48 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:09.274 16:05:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.274 16:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:09.274 [ 00:23:09.274 { 00:23:09.274 "name": "nvme0n1", 00:23:09.274 "aliases": [ 00:23:09.274 "6840df47-241f-4374-93f8-1728d86032c3" 00:23:09.274 ], 00:23:09.274 "product_name": "NVMe disk", 00:23:09.274 "block_size": 512, 00:23:09.274 "num_blocks": 2097152, 00:23:09.274 "uuid": "6840df47-241f-4374-93f8-1728d86032c3", 00:23:09.274 "assigned_rate_limits": { 00:23:09.274 "rw_ios_per_sec": 0, 00:23:09.274 "rw_mbytes_per_sec": 0, 00:23:09.274 "r_mbytes_per_sec": 0, 00:23:09.274 "w_mbytes_per_sec": 0 00:23:09.274 }, 00:23:09.274 "claimed": false, 00:23:09.274 "zoned": false, 00:23:09.274 "supported_io_types": { 00:23:09.274 "read": true, 00:23:09.274 "write": true, 00:23:09.274 "unmap": false, 00:23:09.274 "write_zeroes": true, 00:23:09.274 "flush": true, 00:23:09.274 "reset": true, 00:23:09.275 "compare": true, 00:23:09.275 "compare_and_write": true, 00:23:09.275 
"abort": true, 00:23:09.275 "nvme_admin": true, 00:23:09.275 "nvme_io": true 00:23:09.275 }, 00:23:09.275 "memory_domains": [ 00:23:09.275 { 00:23:09.275 "dma_device_id": "system", 00:23:09.275 "dma_device_type": 1 00:23:09.275 } 00:23:09.275 ], 00:23:09.275 "driver_specific": { 00:23:09.275 "nvme": [ 00:23:09.275 { 00:23:09.275 "trid": { 00:23:09.275 "trtype": "TCP", 00:23:09.275 "adrfam": "IPv4", 00:23:09.275 "traddr": "10.0.0.2", 00:23:09.275 "trsvcid": "4421", 00:23:09.275 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:09.275 }, 00:23:09.275 "ctrlr_data": { 00:23:09.275 "cntlid": 3, 00:23:09.275 "vendor_id": "0x8086", 00:23:09.275 "model_number": "SPDK bdev Controller", 00:23:09.275 "serial_number": "00000000000000000000", 00:23:09.275 "firmware_revision": "24.05", 00:23:09.275 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:09.275 "oacs": { 00:23:09.275 "security": 0, 00:23:09.275 "format": 0, 00:23:09.275 "firmware": 0, 00:23:09.275 "ns_manage": 0 00:23:09.275 }, 00:23:09.275 "multi_ctrlr": true, 00:23:09.275 "ana_reporting": false 00:23:09.275 }, 00:23:09.275 "vs": { 00:23:09.275 "nvme_version": "1.3" 00:23:09.275 }, 00:23:09.275 "ns_data": { 00:23:09.275 "id": 1, 00:23:09.275 "can_share": true 00:23:09.275 } 00:23:09.275 } 00:23:09.275 ], 00:23:09.275 "mp_policy": "active_passive" 00:23:09.275 } 00:23:09.275 } 00:23:09.275 ] 00:23:09.275 16:05:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.275 16:05:48 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.275 16:05:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.275 16:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:09.275 16:05:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.275 16:05:48 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.B7G4KuApAE 00:23:09.275 16:05:48 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:09.275 16:05:48 -- host/async_init.sh@78 -- # nvmftestfini 00:23:09.275 16:05:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:09.275 16:05:48 -- nvmf/common.sh@117 -- # sync 00:23:09.275 16:05:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:09.275 16:05:48 -- nvmf/common.sh@120 -- # set +e 00:23:09.275 16:05:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:09.275 16:05:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:09.275 rmmod nvme_tcp 00:23:09.275 rmmod nvme_fabrics 00:23:09.275 rmmod nvme_keyring 00:23:09.275 16:05:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:09.275 16:05:48 -- nvmf/common.sh@124 -- # set -e 00:23:09.275 16:05:48 -- nvmf/common.sh@125 -- # return 0 00:23:09.275 16:05:48 -- nvmf/common.sh@478 -- # '[' -n 2526729 ']' 00:23:09.275 16:05:48 -- nvmf/common.sh@479 -- # killprocess 2526729 00:23:09.275 16:05:48 -- common/autotest_common.sh@936 -- # '[' -z 2526729 ']' 00:23:09.275 16:05:48 -- common/autotest_common.sh@940 -- # kill -0 2526729 00:23:09.275 16:05:48 -- common/autotest_common.sh@941 -- # uname 00:23:09.275 16:05:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:09.275 16:05:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2526729 00:23:09.275 16:05:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:09.275 16:05:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:09.275 16:05:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2526729' 00:23:09.275 killing process with pid 2526729 00:23:09.275 16:05:48 -- common/autotest_common.sh@955 -- # kill 2526729 00:23:09.275 
[2024-04-26 16:05:48.947601] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:09.275 [2024-04-26 16:05:48.947637] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:09.275 16:05:48 -- common/autotest_common.sh@960 -- # wait 2526729 00:23:10.654 16:05:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:10.654 16:05:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:10.654 16:05:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:10.654 16:05:50 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:10.654 16:05:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:10.654 16:05:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.654 16:05:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.654 16:05:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.192 16:05:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:13.192 00:23:13.192 real 0m10.425s 00:23:13.192 user 0m4.380s 00:23:13.192 sys 0m4.525s 00:23:13.192 16:05:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:13.193 16:05:52 -- common/autotest_common.sh@10 -- # set +x 00:23:13.193 ************************************ 00:23:13.193 END TEST nvmf_async_init 00:23:13.193 ************************************ 00:23:13.193 16:05:52 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:13.193 16:05:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:13.193 16:05:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:13.193 16:05:52 -- common/autotest_common.sh@10 -- # set +x 00:23:13.193 ************************************ 00:23:13.193 START TEST dma 00:23:13.193 ************************************ 00:23:13.193 16:05:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:13.193 * Looking for test storage... 
00:23:13.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:13.193 16:05:52 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:13.193 16:05:52 -- nvmf/common.sh@7 -- # uname -s 00:23:13.193 16:05:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.193 16:05:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.193 16:05:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.193 16:05:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.193 16:05:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.193 16:05:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.193 16:05:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.193 16:05:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.193 16:05:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.193 16:05:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.193 16:05:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:13.193 16:05:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:13.193 16:05:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.193 16:05:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.193 16:05:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:13.193 16:05:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:13.193 16:05:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:13.193 16:05:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.193 16:05:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.193 16:05:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.193 16:05:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.193 16:05:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.193 16:05:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.193 16:05:52 -- paths/export.sh@5 -- # export PATH 00:23:13.193 16:05:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.193 16:05:52 -- nvmf/common.sh@47 -- # : 0 00:23:13.193 16:05:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:13.193 16:05:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:13.193 16:05:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:13.193 16:05:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.193 16:05:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.193 16:05:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:13.193 16:05:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:13.193 16:05:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:13.193 16:05:52 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:13.193 16:05:52 -- host/dma.sh@13 -- # exit 0 00:23:13.193 00:23:13.193 real 0m0.086s 00:23:13.193 user 0m0.035s 00:23:13.193 sys 0m0.056s 00:23:13.193 16:05:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:13.193 16:05:52 -- common/autotest_common.sh@10 -- # set +x 00:23:13.193 ************************************ 00:23:13.193 END TEST dma 00:23:13.193 ************************************ 00:23:13.193 16:05:52 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:13.193 16:05:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:13.193 16:05:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:13.193 16:05:52 -- common/autotest_common.sh@10 -- # set +x 00:23:13.193 ************************************ 00:23:13.193 START TEST nvmf_identify 00:23:13.193 ************************************ 00:23:13.193 16:05:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:13.193 * Looking for test storage... 
00:23:13.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:13.193 16:05:52 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:13.193 16:05:52 -- nvmf/common.sh@7 -- # uname -s 00:23:13.193 16:05:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.193 16:05:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.193 16:05:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.193 16:05:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.193 16:05:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.193 16:05:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.193 16:05:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.193 16:05:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.193 16:05:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.193 16:05:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.193 16:05:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:13.193 16:05:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:13.193 16:05:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.193 16:05:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.193 16:05:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:13.193 16:05:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:13.193 16:05:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:13.193 16:05:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.193 16:05:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.193 16:05:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.193 16:05:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.193 16:05:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.193 16:05:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.193 16:05:52 -- paths/export.sh@5 -- # export PATH 00:23:13.193 16:05:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.193 16:05:52 -- nvmf/common.sh@47 -- # : 0 00:23:13.193 16:05:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:13.193 16:05:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:13.193 16:05:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:13.193 16:05:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.193 16:05:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.194 16:05:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:13.194 16:05:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:13.194 16:05:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:13.194 16:05:52 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:13.194 16:05:52 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:13.194 16:05:52 -- host/identify.sh@14 -- # nvmftestinit 00:23:13.194 16:05:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:13.194 16:05:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.194 16:05:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:13.194 16:05:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:13.194 16:05:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:13.194 16:05:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.194 16:05:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.194 16:05:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.194 16:05:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:13.194 16:05:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:13.194 16:05:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:13.194 16:05:52 -- common/autotest_common.sh@10 -- # set +x 00:23:18.479 16:05:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:18.479 16:05:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:18.479 16:05:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:18.479 16:05:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:18.479 16:05:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:18.479 16:05:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:18.479 16:05:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:18.479 16:05:58 -- nvmf/common.sh@295 -- # net_devs=() 00:23:18.479 16:05:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:18.479 16:05:58 -- nvmf/common.sh@296 
-- # e810=() 00:23:18.479 16:05:58 -- nvmf/common.sh@296 -- # local -ga e810 00:23:18.479 16:05:58 -- nvmf/common.sh@297 -- # x722=() 00:23:18.479 16:05:58 -- nvmf/common.sh@297 -- # local -ga x722 00:23:18.479 16:05:58 -- nvmf/common.sh@298 -- # mlx=() 00:23:18.479 16:05:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:18.479 16:05:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:18.479 16:05:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:18.479 16:05:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:18.479 16:05:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:18.479 16:05:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:18.479 16:05:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:18.479 16:05:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:18.479 16:05:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:18.479 16:05:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:18.479 16:05:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:18.479 16:05:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:18.479 16:05:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:18.479 16:05:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:18.479 16:05:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:18.479 16:05:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:18.479 16:05:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:18.479 16:05:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:18.479 16:05:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:18.479 16:05:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:18.479 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:18.479 16:05:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:18.479 16:05:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:18.479 16:05:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.479 16:05:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.479 16:05:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:18.479 16:05:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:18.479 16:05:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:18.479 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:18.479 16:05:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:18.479 16:05:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:18.479 16:05:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.479 16:05:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.479 16:05:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:18.479 16:05:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:18.479 16:05:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:18.479 16:05:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:18.479 16:05:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:18.479 16:05:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.479 16:05:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:18.479 16:05:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.479 16:05:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:18.479 Found 
net devices under 0000:86:00.0: cvl_0_0 00:23:18.479 16:05:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.479 16:05:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:18.479 16:05:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.479 16:05:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:18.479 16:05:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.479 16:05:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:18.479 Found net devices under 0000:86:00.1: cvl_0_1 00:23:18.479 16:05:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.479 16:05:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:18.479 16:05:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:18.479 16:05:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:18.479 16:05:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:18.479 16:05:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:18.479 16:05:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.479 16:05:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.479 16:05:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:18.479 16:05:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:18.479 16:05:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:18.479 16:05:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:18.479 16:05:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:18.479 16:05:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:18.479 16:05:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.479 16:05:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:18.479 16:05:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:18.479 16:05:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:18.479 16:05:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:18.479 16:05:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:18.739 16:05:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:18.739 16:05:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:18.739 16:05:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:18.739 16:05:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:18.739 16:05:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:18.739 16:05:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:18.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:18.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:23:18.739 00:23:18.739 --- 10.0.0.2 ping statistics --- 00:23:18.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.739 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:23:18.739 16:05:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:18.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:18.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:23:18.739 00:23:18.739 --- 10.0.0.1 ping statistics --- 00:23:18.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.739 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:23:18.739 16:05:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.739 16:05:58 -- nvmf/common.sh@411 -- # return 0 00:23:18.739 16:05:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:18.739 16:05:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.739 16:05:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:18.739 16:05:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:18.739 16:05:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.739 16:05:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:18.739 16:05:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:18.739 16:05:58 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:18.739 16:05:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:18.739 16:05:58 -- common/autotest_common.sh@10 -- # set +x 00:23:18.739 16:05:58 -- host/identify.sh@19 -- # nvmfpid=2530783 00:23:18.739 16:05:58 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:18.739 16:05:58 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:18.739 16:05:58 -- host/identify.sh@23 -- # waitforlisten 2530783 00:23:18.739 16:05:58 -- common/autotest_common.sh@817 -- # '[' -z 2530783 ']' 00:23:18.739 16:05:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.739 16:05:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:18.739 16:05:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.739 16:05:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:18.739 16:05:58 -- common/autotest_common.sh@10 -- # set +x 00:23:18.999 [2024-04-26 16:05:58.422832] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:23:18.999 [2024-04-26 16:05:58.422915] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.999 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.999 [2024-04-26 16:05:58.529625] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:19.259 [2024-04-26 16:05:58.763436] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.259 [2024-04-26 16:05:58.763484] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.259 [2024-04-26 16:05:58.763495] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.259 [2024-04-26 16:05:58.763522] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.259 [2024-04-26 16:05:58.763531] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
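The nvmf_tcp_init network plumbing that the common.sh trace above walks through amounts to the following hedged sketch. Interface names (cvl_0_0, cvl_0_1), addresses, the iptables rule and the target command line are copied from the log; on another machine the detected E810 netdev names would differ, and the steps assume root in the default namespace.

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                 # target port moves into its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# sanity-check reachability in both directions, then launch the target inside the namespace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF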
00:23:19.259 [2024-04-26 16:05:58.763599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.259 [2024-04-26 16:05:58.763618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.259 [2024-04-26 16:05:58.763711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.259 [2024-04-26 16:05:58.763719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:19.828 16:05:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:19.828 16:05:59 -- common/autotest_common.sh@850 -- # return 0 00:23:19.828 16:05:59 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:19.828 16:05:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.828 16:05:59 -- common/autotest_common.sh@10 -- # set +x 00:23:19.828 [2024-04-26 16:05:59.219838] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.828 16:05:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.828 16:05:59 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:19.828 16:05:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:19.828 16:05:59 -- common/autotest_common.sh@10 -- # set +x 00:23:19.828 16:05:59 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:19.828 16:05:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.828 16:05:59 -- common/autotest_common.sh@10 -- # set +x 00:23:19.828 Malloc0 00:23:19.828 16:05:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.828 16:05:59 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:19.828 16:05:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.828 16:05:59 -- common/autotest_common.sh@10 -- # set +x 00:23:19.828 16:05:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.828 16:05:59 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:19.828 16:05:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.829 16:05:59 -- common/autotest_common.sh@10 -- # set +x 00:23:19.829 16:05:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.829 16:05:59 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:19.829 16:05:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.829 16:05:59 -- common/autotest_common.sh@10 -- # set +x 00:23:19.829 [2024-04-26 16:05:59.387508] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.829 16:05:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.829 16:05:59 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:19.829 16:05:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.829 16:05:59 -- common/autotest_common.sh@10 -- # set +x 00:23:19.829 16:05:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.829 16:05:59 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:19.829 16:05:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.829 16:05:59 -- common/autotest_common.sh@10 -- # set +x 00:23:19.829 [2024-04-26 16:05:59.403246] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:23:19.829 [ 
00:23:19.829 { 00:23:19.829 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:19.829 "subtype": "Discovery", 00:23:19.829 "listen_addresses": [ 00:23:19.829 { 00:23:19.829 "transport": "TCP", 00:23:19.829 "trtype": "TCP", 00:23:19.829 "adrfam": "IPv4", 00:23:19.829 "traddr": "10.0.0.2", 00:23:19.829 "trsvcid": "4420" 00:23:19.829 } 00:23:19.829 ], 00:23:19.829 "allow_any_host": true, 00:23:19.829 "hosts": [] 00:23:19.829 }, 00:23:19.829 { 00:23:19.829 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.829 "subtype": "NVMe", 00:23:19.829 "listen_addresses": [ 00:23:19.829 { 00:23:19.829 "transport": "TCP", 00:23:19.829 "trtype": "TCP", 00:23:19.829 "adrfam": "IPv4", 00:23:19.829 "traddr": "10.0.0.2", 00:23:19.829 "trsvcid": "4420" 00:23:19.829 } 00:23:19.829 ], 00:23:19.829 "allow_any_host": true, 00:23:19.829 "hosts": [], 00:23:19.829 "serial_number": "SPDK00000000000001", 00:23:19.829 "model_number": "SPDK bdev Controller", 00:23:19.829 "max_namespaces": 32, 00:23:19.829 "min_cntlid": 1, 00:23:19.829 "max_cntlid": 65519, 00:23:19.829 "namespaces": [ 00:23:19.829 { 00:23:19.829 "nsid": 1, 00:23:19.829 "bdev_name": "Malloc0", 00:23:19.829 "name": "Malloc0", 00:23:19.829 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:19.829 "eui64": "ABCDEF0123456789", 00:23:19.829 "uuid": "0779a0a8-d854-4fbe-9c90-1dd30c75032c" 00:23:19.829 } 00:23:19.829 ] 00:23:19.829 } 00:23:19.829 ] 00:23:19.829 16:05:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.829 16:05:59 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:19.829 [2024-04-26 16:05:59.454482] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
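The identify host setup behind the nvmf_get_subsystems dump above reduces to the RPC calls below, followed by running the identify tool from the initiator side against the discovery subsystem. A hedged sketch only: it assumes the namespaced target started earlier is reachable at 10.0.0.2 and that RPCs go over the default /var/tmp/spdk.sock; all NQNs, bdev names and flags are taken from the log.

rpc=scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_get_subsystems

# -L all enables the debug log flags that produce the nvme_tcp/nvme_ctrlr traces that follow
build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all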
00:23:19.829 [2024-04-26 16:05:59.454541] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2531028 ] 00:23:19.829 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.829 [2024-04-26 16:05:59.500414] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:19.829 [2024-04-26 16:05:59.500520] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:19.829 [2024-04-26 16:05:59.500530] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:19.829 [2024-04-26 16:05:59.500549] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:19.829 [2024-04-26 16:05:59.500562] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:19.829 [2024-04-26 16:05:59.501131] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:19.829 [2024-04-26 16:05:59.501175] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x614000002040 0 00:23:20.091 [2024-04-26 16:05:59.516091] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:20.091 [2024-04-26 16:05:59.516117] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:20.091 [2024-04-26 16:05:59.516124] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:20.091 [2024-04-26 16:05:59.516130] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:20.091 [2024-04-26 16:05:59.516181] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.091 [2024-04-26 16:05:59.516190] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.091 [2024-04-26 16:05:59.516196] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:20.091 [2024-04-26 16:05:59.516220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:20.091 [2024-04-26 16:05:59.516246] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:20.091 [2024-04-26 16:05:59.524088] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.091 [2024-04-26 16:05:59.524108] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.091 [2024-04-26 16:05:59.524114] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.091 [2024-04-26 16:05:59.524124] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:20.091 [2024-04-26 16:05:59.524144] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:20.091 [2024-04-26 16:05:59.524157] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:20.091 [2024-04-26 16:05:59.524166] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:20.091 [2024-04-26 16:05:59.524184] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.091 [2024-04-26 16:05:59.524191] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.091 [2024-04-26 16:05:59.524199] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:20.091 [2024-04-26 16:05:59.524212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.091 [2024-04-26 16:05:59.524232] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:20.091 [2024-04-26 16:05:59.524486] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.091 [2024-04-26 16:05:59.524503] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.091 [2024-04-26 16:05:59.524508] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.091 [2024-04-26 16:05:59.524514] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:20.091 [2024-04-26 16:05:59.524527] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:20.091 [2024-04-26 16:05:59.524543] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:20.091 [2024-04-26 16:05:59.524556] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.091 [2024-04-26 16:05:59.524563] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.091 [2024-04-26 16:05:59.524571] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:20.091 [2024-04-26 16:05:59.524585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.091 [2024-04-26 16:05:59.524604] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:20.091 [2024-04-26 16:05:59.524872] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.092 [2024-04-26 16:05:59.524882] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.092 [2024-04-26 16:05:59.524887] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.092 [2024-04-26 16:05:59.524892] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:20.092 [2024-04-26 16:05:59.524902] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:20.092 [2024-04-26 16:05:59.524914] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:20.092 [2024-04-26 16:05:59.524923] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.092 [2024-04-26 16:05:59.524929] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.092 [2024-04-26 16:05:59.524935] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:20.092 [2024-04-26 16:05:59.524948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.092 [2024-04-26 16:05:59.524963] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:20.092 [2024-04-26 16:05:59.525121] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.092 [2024-04-26 16:05:59.525136] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.092 [2024-04-26 16:05:59.525140] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.092 [2024-04-26 16:05:59.525146] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:20.092 [2024-04-26 16:05:59.525155] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:20.092 [2024-04-26 16:05:59.525170] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.092 [2024-04-26 16:05:59.525177] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.092 [2024-04-26 16:05:59.525186] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:20.092 [2024-04-26 16:05:59.525197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.092 [2024-04-26 16:05:59.525215] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:20.092 [2024-04-26 16:05:59.525367] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.092 [2024-04-26 16:05:59.525381] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.092 [2024-04-26 16:05:59.525386] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.092 [2024-04-26 16:05:59.525391] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:20.092 [2024-04-26 16:05:59.525399] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:20.092 [2024-04-26 16:05:59.525407] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:20.092 [2024-04-26 16:05:59.525421] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:20.092 [2024-04-26 16:05:59.525532] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:20.092 [2024-04-26 16:05:59.525540] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:20.092 [2024-04-26 16:05:59.525564] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.092 [2024-04-26 16:05:59.525570] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.092 [2024-04-26 16:05:59.525576] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:20.092 [2024-04-26 16:05:59.525590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.092 [2024-04-26 16:05:59.525608] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:20.092 [2024-04-26 16:05:59.525756] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.092 [2024-04-26 16:05:59.525772] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.092 [2024-04-26 16:05:59.525777] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.092 [2024-04-26 16:05:59.525783] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:20.092 [2024-04-26 16:05:59.525791] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:20.092 [2024-04-26 16:05:59.525806] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.092 [2024-04-26 16:05:59.525815] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.092 [2024-04-26 16:05:59.525821] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:20.092 [2024-04-26 16:05:59.525831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.092 [2024-04-26 16:05:59.525848] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:20.092 [2024-04-26 16:05:59.525986] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.092 [2024-04-26 16:05:59.525999] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.092 [2024-04-26 16:05:59.526003] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.092 [2024-04-26 16:05:59.526009] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:20.092 [2024-04-26 16:05:59.526016] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:20.092 [2024-04-26 16:05:59.526024] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:20.092 [2024-04-26 16:05:59.526035] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:20.092 [2024-04-26 16:05:59.526050] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:20.092 [2024-04-26 16:05:59.526075] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.092 [2024-04-26 16:05:59.526082] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:20.092 [2024-04-26 16:05:59.526096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.092 [2024-04-26 16:05:59.526113] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:20.092 [2024-04-26 16:05:59.526355] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.092 [2024-04-26 16:05:59.526370] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.092 [2024-04-26 16:05:59.526375] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.092 [2024-04-26 16:05:59.526381] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=0 00:23:20.092 [2024-04-26 16:05:59.526391] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:23:20.092 [2024-04-26 16:05:59.526398] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.092 [2024-04-26 16:05:59.526412] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.092 [2024-04-26 16:05:59.526419] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.092 [2024-04-26 16:05:59.526583] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.092 [2024-04-26 16:05:59.526591] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.092 [2024-04-26 16:05:59.526595] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.092 [2024-04-26 16:05:59.526601] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:20.092 [2024-04-26 16:05:59.526615] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:20.092 [2024-04-26 16:05:59.526622] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:20.093 [2024-04-26 16:05:59.526629] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:20.093 [2024-04-26 16:05:59.526639] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:20.093 [2024-04-26 16:05:59.526645] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:20.093 [2024-04-26 16:05:59.526652] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:20.093 [2024-04-26 16:05:59.526666] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:20.093 [2024-04-26 16:05:59.526678] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.093 [2024-04-26 16:05:59.526686] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.093 [2024-04-26 16:05:59.526692] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:20.093 [2024-04-26 16:05:59.526704] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:20.093 [2024-04-26 16:05:59.526720] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:20.093 [2024-04-26 16:05:59.527004] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.093 [2024-04-26 16:05:59.527015] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.093 [2024-04-26 16:05:59.527019] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.093 [2024-04-26 16:05:59.527024] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:20.093 [2024-04-26 16:05:59.527034] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.093 [2024-04-26 16:05:59.527042] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.093 [2024-04-26 16:05:59.527049] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:20.093 [2024-04-26 16:05:59.527062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.093 [2024-04-26 16:05:59.527076] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.093 [2024-04-26 16:05:59.527081] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.093 [2024-04-26 16:05:59.527086] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x614000002040) 00:23:20.093 [2024-04-26 16:05:59.527094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.093 [2024-04-26 16:05:59.527101] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.093 [2024-04-26 16:05:59.527109] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.093 [2024-04-26 16:05:59.527114] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x614000002040) 00:23:20.093 [2024-04-26 16:05:59.527122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.093 [2024-04-26 16:05:59.527129] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.093 [2024-04-26 16:05:59.527134] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.093 [2024-04-26 16:05:59.527140] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:20.093 [2024-04-26 16:05:59.527148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.093 [2024-04-26 16:05:59.527155] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:20.093 [2024-04-26 16:05:59.527169] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:20.093 [2024-04-26 16:05:59.527178] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.093 [2024-04-26 16:05:59.527183] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:20.093 [2024-04-26 16:05:59.527193] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.093 [2024-04-26 16:05:59.527210] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:20.093 [2024-04-26 16:05:59.527217] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:23:20.093 [2024-04-26 16:05:59.527223] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:23:20.093 [2024-04-26 16:05:59.527229] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:20.093 [2024-04-26 16:05:59.527235] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:20.093 [2024-04-26 16:05:59.527416] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.093 [2024-04-26 16:05:59.527430] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.093 [2024-04-26 16:05:59.527435] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.093 [2024-04-26 16:05:59.527440] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:20.093 [2024-04-26 16:05:59.527448] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:20.093 [2024-04-26 16:05:59.527459] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:20.093 [2024-04-26 16:05:59.527478] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.093 [2024-04-26 16:05:59.527485] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:20.093 [2024-04-26 16:05:59.527496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.093 [2024-04-26 16:05:59.527513] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:20.093 [2024-04-26 16:05:59.527821] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.093 [2024-04-26 16:05:59.527830] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.093 [2024-04-26 16:05:59.527836] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.093 [2024-04-26 16:05:59.527841] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:23:20.093 [2024-04-26 16:05:59.527848] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:23:20.093 [2024-04-26 16:05:59.527858] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.093 [2024-04-26 16:05:59.528015] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.093 [2024-04-26 16:05:59.528021] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.093 [2024-04-26 16:05:59.572083] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.093 [2024-04-26 16:05:59.572104] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.093 [2024-04-26 16:05:59.572109] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.093 [2024-04-26 16:05:59.572116] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:20.093 [2024-04-26 16:05:59.572140] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:20.093 [2024-04-26 16:05:59.572180] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.093 [2024-04-26 16:05:59.572188] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:20.093 [2024-04-26 16:05:59.572202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.093 [2024-04-26 16:05:59.572213] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.093 [2024-04-26 16:05:59.572219] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:23:20.094 [2024-04-26 16:05:59.572224] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:23:20.094 [2024-04-26 16:05:59.572233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.094 [2024-04-26 16:05:59.572253] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:20.094 [2024-04-26 16:05:59.572261] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:23:20.094 [2024-04-26 16:05:59.572757] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.094 [2024-04-26 16:05:59.572766] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.094 [2024-04-26 16:05:59.572771] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.094 [2024-04-26 16:05:59.572777] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=1024, cccid=4 00:23:20.094 [2024-04-26 16:05:59.572784] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=1024 00:23:20.094 [2024-04-26 16:05:59.572790] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.094 [2024-04-26 16:05:59.572799] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.094 [2024-04-26 16:05:59.572808] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.094 [2024-04-26 16:05:59.572815] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.094 [2024-04-26 16:05:59.572826] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.094 [2024-04-26 16:05:59.572831] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.094 [2024-04-26 16:05:59.572837] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:23:20.094 [2024-04-26 16:05:59.614284] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.094 [2024-04-26 16:05:59.614305] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.094 [2024-04-26 16:05:59.614310] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.094 [2024-04-26 16:05:59.614326] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:20.094 [2024-04-26 16:05:59.614350] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.094 [2024-04-26 16:05:59.614357] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:20.094 [2024-04-26 16:05:59.614369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.094 [2024-04-26 16:05:59.614397] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:20.094 [2024-04-26 16:05:59.614570] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.094 [2024-04-26 16:05:59.614583] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.094 [2024-04-26 16:05:59.614588] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.094 [2024-04-26 16:05:59.614594] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x614000002040): datao=0, datal=3072, cccid=4 00:23:20.094 [2024-04-26 16:05:59.614600] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=3072 00:23:20.094 [2024-04-26 16:05:59.614605] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.094 [2024-04-26 16:05:59.614786] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.094 [2024-04-26 16:05:59.614792] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.094 [2024-04-26 16:05:59.614894] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.094 [2024-04-26 16:05:59.614906] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.094 [2024-04-26 16:05:59.614911] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.094 [2024-04-26 16:05:59.614916] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:20.094 [2024-04-26 16:05:59.614934] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.094 [2024-04-26 16:05:59.614940] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:20.094 [2024-04-26 16:05:59.614952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.094 [2024-04-26 16:05:59.614980] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:20.094 [2024-04-26 16:05:59.615149] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.094 [2024-04-26 16:05:59.615162] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.094 [2024-04-26 16:05:59.615167] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.094 [2024-04-26 16:05:59.615173] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=8, cccid=4 00:23:20.094 [2024-04-26 16:05:59.615179] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=8 00:23:20.094 [2024-04-26 16:05:59.615184] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.094 [2024-04-26 16:05:59.615193] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.094 [2024-04-26 16:05:59.615198] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.094 [2024-04-26 16:05:59.656438] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.094 [2024-04-26 16:05:59.656460] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.094 [2024-04-26 16:05:59.656465] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.094 [2024-04-26 16:05:59.656471] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:20.094 ===================================================== 00:23:20.094 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:20.094 ===================================================== 00:23:20.094 Controller Capabilities/Features 00:23:20.094 ================================ 00:23:20.094 Vendor ID: 0000 00:23:20.094 Subsystem Vendor ID: 0000 00:23:20.094 Serial Number: .................... 
00:23:20.094 Model Number: ........................................ 00:23:20.094 Firmware Version: 24.05 00:23:20.094 Recommended Arb Burst: 0 00:23:20.094 IEEE OUI Identifier: 00 00 00 00:23:20.094 Multi-path I/O 00:23:20.094 May have multiple subsystem ports: No 00:23:20.094 May have multiple controllers: No 00:23:20.094 Associated with SR-IOV VF: No 00:23:20.094 Max Data Transfer Size: 131072 00:23:20.094 Max Number of Namespaces: 0 00:23:20.094 Max Number of I/O Queues: 1024 00:23:20.094 NVMe Specification Version (VS): 1.3 00:23:20.094 NVMe Specification Version (Identify): 1.3 00:23:20.094 Maximum Queue Entries: 128 00:23:20.094 Contiguous Queues Required: Yes 00:23:20.094 Arbitration Mechanisms Supported 00:23:20.094 Weighted Round Robin: Not Supported 00:23:20.094 Vendor Specific: Not Supported 00:23:20.094 Reset Timeout: 15000 ms 00:23:20.094 Doorbell Stride: 4 bytes 00:23:20.094 NVM Subsystem Reset: Not Supported 00:23:20.094 Command Sets Supported 00:23:20.094 NVM Command Set: Supported 00:23:20.094 Boot Partition: Not Supported 00:23:20.094 Memory Page Size Minimum: 4096 bytes 00:23:20.094 Memory Page Size Maximum: 4096 bytes 00:23:20.094 Persistent Memory Region: Not Supported 00:23:20.094 Optional Asynchronous Events Supported 00:23:20.094 Namespace Attribute Notices: Not Supported 00:23:20.094 Firmware Activation Notices: Not Supported 00:23:20.094 ANA Change Notices: Not Supported 00:23:20.094 PLE Aggregate Log Change Notices: Not Supported 00:23:20.094 LBA Status Info Alert Notices: Not Supported 00:23:20.094 EGE Aggregate Log Change Notices: Not Supported 00:23:20.094 Normal NVM Subsystem Shutdown event: Not Supported 00:23:20.094 Zone Descriptor Change Notices: Not Supported 00:23:20.094 Discovery Log Change Notices: Supported 00:23:20.094 Controller Attributes 00:23:20.094 128-bit Host Identifier: Not Supported 00:23:20.094 Non-Operational Permissive Mode: Not Supported 00:23:20.095 NVM Sets: Not Supported 00:23:20.095 Read Recovery Levels: Not Supported 00:23:20.095 Endurance Groups: Not Supported 00:23:20.095 Predictable Latency Mode: Not Supported 00:23:20.095 Traffic Based Keep ALive: Not Supported 00:23:20.095 Namespace Granularity: Not Supported 00:23:20.095 SQ Associations: Not Supported 00:23:20.095 UUID List: Not Supported 00:23:20.095 Multi-Domain Subsystem: Not Supported 00:23:20.095 Fixed Capacity Management: Not Supported 00:23:20.095 Variable Capacity Management: Not Supported 00:23:20.095 Delete Endurance Group: Not Supported 00:23:20.095 Delete NVM Set: Not Supported 00:23:20.095 Extended LBA Formats Supported: Not Supported 00:23:20.095 Flexible Data Placement Supported: Not Supported 00:23:20.095 00:23:20.095 Controller Memory Buffer Support 00:23:20.095 ================================ 00:23:20.095 Supported: No 00:23:20.095 00:23:20.095 Persistent Memory Region Support 00:23:20.095 ================================ 00:23:20.095 Supported: No 00:23:20.095 00:23:20.095 Admin Command Set Attributes 00:23:20.095 ============================ 00:23:20.095 Security Send/Receive: Not Supported 00:23:20.095 Format NVM: Not Supported 00:23:20.095 Firmware Activate/Download: Not Supported 00:23:20.095 Namespace Management: Not Supported 00:23:20.095 Device Self-Test: Not Supported 00:23:20.095 Directives: Not Supported 00:23:20.095 NVMe-MI: Not Supported 00:23:20.095 Virtualization Management: Not Supported 00:23:20.095 Doorbell Buffer Config: Not Supported 00:23:20.095 Get LBA Status Capability: Not Supported 00:23:20.095 Command & Feature Lockdown Capability: 
Not Supported 00:23:20.095 Abort Command Limit: 1 00:23:20.095 Async Event Request Limit: 4 00:23:20.095 Number of Firmware Slots: N/A 00:23:20.095 Firmware Slot 1 Read-Only: N/A 00:23:20.095 Firmware Activation Without Reset: N/A 00:23:20.095 Multiple Update Detection Support: N/A 00:23:20.095 Firmware Update Granularity: No Information Provided 00:23:20.095 Per-Namespace SMART Log: No 00:23:20.095 Asymmetric Namespace Access Log Page: Not Supported 00:23:20.095 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:20.095 Command Effects Log Page: Not Supported 00:23:20.095 Get Log Page Extended Data: Supported 00:23:20.095 Telemetry Log Pages: Not Supported 00:23:20.095 Persistent Event Log Pages: Not Supported 00:23:20.095 Supported Log Pages Log Page: May Support 00:23:20.095 Commands Supported & Effects Log Page: Not Supported 00:23:20.095 Feature Identifiers & Effects Log Page:May Support 00:23:20.095 NVMe-MI Commands & Effects Log Page: May Support 00:23:20.095 Data Area 4 for Telemetry Log: Not Supported 00:23:20.095 Error Log Page Entries Supported: 128 00:23:20.095 Keep Alive: Not Supported 00:23:20.095 00:23:20.095 NVM Command Set Attributes 00:23:20.095 ========================== 00:23:20.095 Submission Queue Entry Size 00:23:20.095 Max: 1 00:23:20.095 Min: 1 00:23:20.095 Completion Queue Entry Size 00:23:20.095 Max: 1 00:23:20.095 Min: 1 00:23:20.095 Number of Namespaces: 0 00:23:20.095 Compare Command: Not Supported 00:23:20.095 Write Uncorrectable Command: Not Supported 00:23:20.095 Dataset Management Command: Not Supported 00:23:20.095 Write Zeroes Command: Not Supported 00:23:20.095 Set Features Save Field: Not Supported 00:23:20.095 Reservations: Not Supported 00:23:20.095 Timestamp: Not Supported 00:23:20.095 Copy: Not Supported 00:23:20.095 Volatile Write Cache: Not Present 00:23:20.095 Atomic Write Unit (Normal): 1 00:23:20.095 Atomic Write Unit (PFail): 1 00:23:20.095 Atomic Compare & Write Unit: 1 00:23:20.095 Fused Compare & Write: Supported 00:23:20.095 Scatter-Gather List 00:23:20.095 SGL Command Set: Supported 00:23:20.095 SGL Keyed: Supported 00:23:20.095 SGL Bit Bucket Descriptor: Not Supported 00:23:20.095 SGL Metadata Pointer: Not Supported 00:23:20.095 Oversized SGL: Not Supported 00:23:20.095 SGL Metadata Address: Not Supported 00:23:20.095 SGL Offset: Supported 00:23:20.095 Transport SGL Data Block: Not Supported 00:23:20.095 Replay Protected Memory Block: Not Supported 00:23:20.095 00:23:20.095 Firmware Slot Information 00:23:20.095 ========================= 00:23:20.095 Active slot: 0 00:23:20.095 00:23:20.095 00:23:20.095 Error Log 00:23:20.095 ========= 00:23:20.095 00:23:20.095 Active Namespaces 00:23:20.095 ================= 00:23:20.095 Discovery Log Page 00:23:20.095 ================== 00:23:20.095 Generation Counter: 2 00:23:20.095 Number of Records: 2 00:23:20.095 Record Format: 0 00:23:20.095 00:23:20.095 Discovery Log Entry 0 00:23:20.095 ---------------------- 00:23:20.095 Transport Type: 3 (TCP) 00:23:20.095 Address Family: 1 (IPv4) 00:23:20.095 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:20.095 Entry Flags: 00:23:20.095 Duplicate Returned Information: 1 00:23:20.095 Explicit Persistent Connection Support for Discovery: 1 00:23:20.095 Transport Requirements: 00:23:20.095 Secure Channel: Not Required 00:23:20.095 Port ID: 0 (0x0000) 00:23:20.095 Controller ID: 65535 (0xffff) 00:23:20.095 Admin Max SQ Size: 128 00:23:20.095 Transport Service Identifier: 4420 00:23:20.095 NVM Subsystem Qualified Name: 
nqn.2014-08.org.nvmexpress.discovery 00:23:20.095 Transport Address: 10.0.0.2 00:23:20.095 Discovery Log Entry 1 00:23:20.095 ---------------------- 00:23:20.095 Transport Type: 3 (TCP) 00:23:20.095 Address Family: 1 (IPv4) 00:23:20.095 Subsystem Type: 2 (NVM Subsystem) 00:23:20.095 Entry Flags: 00:23:20.096 Duplicate Returned Information: 0 00:23:20.096 Explicit Persistent Connection Support for Discovery: 0 00:23:20.096 Transport Requirements: 00:23:20.096 Secure Channel: Not Required 00:23:20.096 Port ID: 0 (0x0000) 00:23:20.096 Controller ID: 65535 (0xffff) 00:23:20.096 Admin Max SQ Size: 128 00:23:20.096 Transport Service Identifier: 4420 00:23:20.096 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:20.096 Transport Address: 10.0.0.2 [2024-04-26 16:05:59.656606] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:20.096 [2024-04-26 16:05:59.656625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.096 [2024-04-26 16:05:59.656635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.096 [2024-04-26 16:05:59.656643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.096 [2024-04-26 16:05:59.656650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.096 [2024-04-26 16:05:59.656663] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.096 [2024-04-26 16:05:59.656670] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.096 [2024-04-26 16:05:59.656676] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:20.096 [2024-04-26 16:05:59.656687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.096 [2024-04-26 16:05:59.656707] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:20.096 [2024-04-26 16:05:59.656860] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.096 [2024-04-26 16:05:59.656874] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.096 [2024-04-26 16:05:59.656883] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.096 [2024-04-26 16:05:59.656889] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:20.096 [2024-04-26 16:05:59.656901] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.096 [2024-04-26 16:05:59.656907] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.096 [2024-04-26 16:05:59.656913] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:20.096 [2024-04-26 16:05:59.656923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.096 [2024-04-26 16:05:59.656945] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:20.096 [2024-04-26 16:05:59.657103] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.096 [2024-04-26 16:05:59.657116] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.096 [2024-04-26 16:05:59.657121] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.096 [2024-04-26 16:05:59.657126] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:20.096 [2024-04-26 16:05:59.657137] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:20.096 [2024-04-26 16:05:59.657144] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:20.096 [2024-04-26 16:05:59.657162] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.096 [2024-04-26 16:05:59.657169] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.096 [2024-04-26 16:05:59.657174] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:20.096 [2024-04-26 16:05:59.657185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.096 [2024-04-26 16:05:59.657202] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:20.096 [2024-04-26 16:05:59.657483] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.096 [2024-04-26 16:05:59.657491] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.096 [2024-04-26 16:05:59.657496] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.096 [2024-04-26 16:05:59.657500] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:20.096 [2024-04-26 16:05:59.657514] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.096 [2024-04-26 16:05:59.657520] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.096 [2024-04-26 16:05:59.657525] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:20.096 [2024-04-26 16:05:59.657534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.096 [2024-04-26 16:05:59.657547] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:20.096 [2024-04-26 16:05:59.657689] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.096 [2024-04-26 16:05:59.657705] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.096 [2024-04-26 16:05:59.657709] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.096 [2024-04-26 16:05:59.657715] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:20.096 [2024-04-26 16:05:59.657730] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.096 [2024-04-26 16:05:59.657735] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.096 [2024-04-26 16:05:59.657741] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:20.096 [2024-04-26 16:05:59.657754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.096 [2024-04-26 16:05:59.657771] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:20.096 [2024-04-26 16:05:59.661081] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.096 [2024-04-26 16:05:59.661099] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.096 [2024-04-26 16:05:59.661103] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.096 [2024-04-26 16:05:59.661109] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:20.096 [2024-04-26 16:05:59.661127] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.096 [2024-04-26 16:05:59.661133] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.096 [2024-04-26 16:05:59.661138] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:20.096 [2024-04-26 16:05:59.661149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.096 [2024-04-26 16:05:59.661166] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:20.096 [2024-04-26 16:05:59.661393] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.096 [2024-04-26 16:05:59.661405] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.096 [2024-04-26 16:05:59.661410] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.096 [2024-04-26 16:05:59.661415] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:20.096 [2024-04-26 16:05:59.661427] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:23:20.096 00:23:20.096 16:05:59 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:20.096 [2024-04-26 16:05:59.750475] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:23:20.096 [2024-04-26 16:05:59.750542] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2531031 ] 00:23:20.445 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.445 [2024-04-26 16:05:59.796423] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:20.445 [2024-04-26 16:05:59.796547] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:20.445 [2024-04-26 16:05:59.796558] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:20.445 [2024-04-26 16:05:59.796576] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:20.445 [2024-04-26 16:05:59.796591] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:20.445 [2024-04-26 16:05:59.797130] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:20.445 [2024-04-26 16:05:59.797171] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x614000002040 0 00:23:20.445 [2024-04-26 16:05:59.804089] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:20.445 [2024-04-26 16:05:59.804124] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:20.445 [2024-04-26 16:05:59.804136] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:20.445 [2024-04-26 16:05:59.804144] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:20.445 [2024-04-26 16:05:59.804196] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.445 [2024-04-26 16:05:59.804208] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.445 [2024-04-26 16:05:59.804214] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:20.445 [2024-04-26 16:05:59.804237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:20.445 [2024-04-26 16:05:59.804261] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:20.445 [2024-04-26 16:05:59.811084] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.445 [2024-04-26 16:05:59.811109] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.445 [2024-04-26 16:05:59.811114] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.445 [2024-04-26 16:05:59.811122] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:20.445 [2024-04-26 16:05:59.811141] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:20.445 [2024-04-26 16:05:59.811155] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:20.445 [2024-04-26 16:05:59.811163] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:20.445 [2024-04-26 16:05:59.811184] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.445 [2024-04-26 16:05:59.811191] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:20.445 [2024-04-26 16:05:59.811197] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:20.445 [2024-04-26 16:05:59.811211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.445 [2024-04-26 16:05:59.811236] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:20.445 [2024-04-26 16:05:59.811489] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.445 [2024-04-26 16:05:59.811505] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.445 [2024-04-26 16:05:59.811511] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.445 [2024-04-26 16:05:59.811517] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:20.445 [2024-04-26 16:05:59.811529] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:20.445 [2024-04-26 16:05:59.811546] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:20.445 [2024-04-26 16:05:59.811557] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.445 [2024-04-26 16:05:59.811566] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.445 [2024-04-26 16:05:59.811571] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:20.445 [2024-04-26 16:05:59.811584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.445 [2024-04-26 16:05:59.811602] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:20.445 [2024-04-26 16:05:59.811884] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.445 [2024-04-26 16:05:59.811894] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.445 [2024-04-26 16:05:59.811901] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.445 [2024-04-26 16:05:59.811906] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:20.445 [2024-04-26 16:05:59.811914] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:20.445 [2024-04-26 16:05:59.811928] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:20.445 [2024-04-26 16:05:59.811938] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.445 [2024-04-26 16:05:59.811944] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.445 [2024-04-26 16:05:59.811950] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:20.445 [2024-04-26 16:05:59.811960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.445 [2024-04-26 16:05:59.811978] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:20.445 [2024-04-26 16:05:59.812121] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.445 [2024-04-26 16:05:59.812136] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.445 [2024-04-26 16:05:59.812141] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.445 [2024-04-26 16:05:59.812146] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:20.445 [2024-04-26 16:05:59.812155] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:20.445 [2024-04-26 16:05:59.812172] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.445 [2024-04-26 16:05:59.812178] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.445 [2024-04-26 16:05:59.812184] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:20.445 [2024-04-26 16:05:59.812194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.445 [2024-04-26 16:05:59.812211] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:20.445 [2024-04-26 16:05:59.812360] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.445 [2024-04-26 16:05:59.812373] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.445 [2024-04-26 16:05:59.812378] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.445 [2024-04-26 16:05:59.812383] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:20.445 [2024-04-26 16:05:59.812394] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:20.445 [2024-04-26 16:05:59.812402] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:20.445 [2024-04-26 16:05:59.812414] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:20.445 [2024-04-26 16:05:59.812524] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:20.445 [2024-04-26 16:05:59.812532] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:20.445 [2024-04-26 16:05:59.812550] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.445 [2024-04-26 16:05:59.812556] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.446 [2024-04-26 16:05:59.812562] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:20.446 [2024-04-26 16:05:59.812573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-04-26 16:05:59.812593] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:20.446 [2024-04-26 16:05:59.812877] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.446 [2024-04-26 16:05:59.812887] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.446 [2024-04-26 16:05:59.812892] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.446 [2024-04-26 
16:05:59.812897] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:20.446 [2024-04-26 16:05:59.812905] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:20.446 [2024-04-26 16:05:59.812918] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.446 [2024-04-26 16:05:59.812924] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.446 [2024-04-26 16:05:59.812929] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:20.446 [2024-04-26 16:05:59.812941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-04-26 16:05:59.812956] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:20.446 [2024-04-26 16:05:59.813104] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.446 [2024-04-26 16:05:59.813119] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.446 [2024-04-26 16:05:59.813124] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.446 [2024-04-26 16:05:59.813130] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:20.446 [2024-04-26 16:05:59.813137] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:20.446 [2024-04-26 16:05:59.813145] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:20.446 [2024-04-26 16:05:59.813157] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:20.446 [2024-04-26 16:05:59.813168] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:20.446 [2024-04-26 16:05:59.813187] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.446 [2024-04-26 16:05:59.813193] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:20.446 [2024-04-26 16:05:59.813205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-04-26 16:05:59.813222] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:20.446 [2024-04-26 16:05:59.813469] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.446 [2024-04-26 16:05:59.813484] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.446 [2024-04-26 16:05:59.813489] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.446 [2024-04-26 16:05:59.813495] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=0 00:23:20.446 [2024-04-26 16:05:59.813505] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:23:20.446 [2024-04-26 16:05:59.813511] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.446 
[2024-04-26 16:05:59.813523] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.446 [2024-04-26 16:05:59.813531] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.446 [2024-04-26 16:05:59.857083] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.446 [2024-04-26 16:05:59.857104] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.446 [2024-04-26 16:05:59.857110] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.446 [2024-04-26 16:05:59.857119] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:20.446 [2024-04-26 16:05:59.857135] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:20.446 [2024-04-26 16:05:59.857143] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:20.446 [2024-04-26 16:05:59.857149] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:20.446 [2024-04-26 16:05:59.857156] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:20.446 [2024-04-26 16:05:59.857162] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:20.446 [2024-04-26 16:05:59.857173] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:20.446 [2024-04-26 16:05:59.857188] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:20.446 [2024-04-26 16:05:59.857202] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.446 [2024-04-26 16:05:59.857208] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.446 [2024-04-26 16:05:59.857214] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:20.446 [2024-04-26 16:05:59.857227] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:20.446 [2024-04-26 16:05:59.857246] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:20.446 [2024-04-26 16:05:59.857476] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.446 [2024-04-26 16:05:59.857490] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.446 [2024-04-26 16:05:59.857495] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.446 [2024-04-26 16:05:59.857500] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:20.446 [2024-04-26 16:05:59.857511] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.446 [2024-04-26 16:05:59.857518] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.446 [2024-04-26 16:05:59.857523] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:20.446 [2024-04-26 16:05:59.857537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.446 [2024-04-26 16:05:59.857547] nvme_tcp.c: 766:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:23:20.446 [2024-04-26 16:05:59.857553] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.446 [2024-04-26 16:05:59.857557] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x614000002040) 00:23:20.446 [2024-04-26 16:05:59.857566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.446 [2024-04-26 16:05:59.857573] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.446 [2024-04-26 16:05:59.857579] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.446 [2024-04-26 16:05:59.857583] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x614000002040) 00:23:20.446 [2024-04-26 16:05:59.857592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.446 [2024-04-26 16:05:59.857598] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.446 [2024-04-26 16:05:59.857603] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.446 [2024-04-26 16:05:59.857608] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:20.446 [2024-04-26 16:05:59.857616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.446 [2024-04-26 16:05:59.857625] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:20.446 [2024-04-26 16:05:59.857645] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:20.446 [2024-04-26 16:05:59.857655] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.446 [2024-04-26 16:05:59.857660] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:20.446 [2024-04-26 16:05:59.857670] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.446 [2024-04-26 16:05:59.857689] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:20.446 [2024-04-26 16:05:59.857696] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:23:20.446 [2024-04-26 16:05:59.857702] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:23:20.446 [2024-04-26 16:05:59.857708] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:20.446 [2024-04-26 16:05:59.857714] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:20.446 [2024-04-26 16:05:59.857984] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.446 [2024-04-26 16:05:59.857997] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.446 [2024-04-26 16:05:59.858002] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.446 [2024-04-26 16:05:59.858008] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:20.447 [2024-04-26 16:05:59.858016] 
nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:20.447 [2024-04-26 16:05:59.858024] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:20.447 [2024-04-26 16:05:59.858036] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:20.447 [2024-04-26 16:05:59.858044] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:20.447 [2024-04-26 16:05:59.858053] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.858063] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.858075] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:20.447 [2024-04-26 16:05:59.858087] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:20.447 [2024-04-26 16:05:59.858103] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:20.447 [2024-04-26 16:05:59.858329] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.447 [2024-04-26 16:05:59.858342] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.447 [2024-04-26 16:05:59.858347] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.858352] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:20.447 [2024-04-26 16:05:59.858414] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:20.447 [2024-04-26 16:05:59.858437] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:20.447 [2024-04-26 16:05:59.858451] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.858457] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:20.447 [2024-04-26 16:05:59.858471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-04-26 16:05:59.858489] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:20.447 [2024-04-26 16:05:59.858725] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.447 [2024-04-26 16:05:59.858739] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.447 [2024-04-26 16:05:59.858744] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.858750] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:23:20.447 [2024-04-26 16:05:59.858757] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:23:20.447 [2024-04-26 16:05:59.858763] nvme_tcp.c: 766:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.858939] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.858945] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.899212] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.447 [2024-04-26 16:05:59.899232] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.447 [2024-04-26 16:05:59.899237] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.899243] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:20.447 [2024-04-26 16:05:59.899272] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:20.447 [2024-04-26 16:05:59.899296] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:20.447 [2024-04-26 16:05:59.899312] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:20.447 [2024-04-26 16:05:59.899325] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.899332] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:20.447 [2024-04-26 16:05:59.899344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-04-26 16:05:59.899362] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:20.447 [2024-04-26 16:05:59.899690] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.447 [2024-04-26 16:05:59.899703] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.447 [2024-04-26 16:05:59.899708] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.899714] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:23:20.447 [2024-04-26 16:05:59.899720] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:23:20.447 [2024-04-26 16:05:59.899731] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.899886] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.899892] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.944080] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.447 [2024-04-26 16:05:59.944100] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.447 [2024-04-26 16:05:59.944105] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.944111] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:20.447 [2024-04-26 16:05:59.944136] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:20.447 [2024-04-26 16:05:59.944154] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:20.447 [2024-04-26 16:05:59.944169] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.944176] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:20.447 [2024-04-26 16:05:59.944192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-04-26 16:05:59.944211] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:20.447 [2024-04-26 16:05:59.944459] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.447 [2024-04-26 16:05:59.944474] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.447 [2024-04-26 16:05:59.944479] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.944484] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:23:20.447 [2024-04-26 16:05:59.944490] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:23:20.447 [2024-04-26 16:05:59.944496] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.944667] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.944674] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.986271] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.447 [2024-04-26 16:05:59.986291] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.447 [2024-04-26 16:05:59.986297] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.986303] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:20.447 [2024-04-26 16:05:59.986322] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:20.447 [2024-04-26 16:05:59.986335] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:20.447 [2024-04-26 16:05:59.986348] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:20.447 [2024-04-26 16:05:59.986357] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:20.447 [2024-04-26 16:05:59.986365] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:20.447 [2024-04-26 16:05:59.986372] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:20.447 [2024-04-26 16:05:59.986379] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:20.447 [2024-04-26 16:05:59.986386] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:20.447 
[2024-04-26 16:05:59.986415] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.986422] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:20.447 [2024-04-26 16:05:59.986434] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.447 [2024-04-26 16:05:59.986446] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.986453] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.447 [2024-04-26 16:05:59.986458] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:23:20.447 [2024-04-26 16:05:59.986470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.447 [2024-04-26 16:05:59.986492] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:20.447 [2024-04-26 16:05:59.986500] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:23:20.447 [2024-04-26 16:05:59.986806] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.448 [2024-04-26 16:05:59.986815] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.448 [2024-04-26 16:05:59.986821] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.986827] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:20.448 [2024-04-26 16:05:59.986836] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.448 [2024-04-26 16:05:59.986846] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.448 [2024-04-26 16:05:59.986851] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.986856] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:23:20.448 [2024-04-26 16:05:59.986868] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.986874] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:23:20.448 [2024-04-26 16:05:59.986883] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-04-26 16:05:59.986897] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:23:20.448 [2024-04-26 16:05:59.987039] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.448 [2024-04-26 16:05:59.987052] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.448 [2024-04-26 16:05:59.987057] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.987063] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:23:20.448 [2024-04-26 16:05:59.987087] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.987094] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:23:20.448 [2024-04-26 16:05:59.987104] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-04-26 16:05:59.987121] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:23:20.448 [2024-04-26 16:05:59.987257] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.448 [2024-04-26 16:05:59.987269] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.448 [2024-04-26 16:05:59.987273] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.987279] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:23:20.448 [2024-04-26 16:05:59.987293] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.987299] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:23:20.448 [2024-04-26 16:05:59.987309] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-04-26 16:05:59.987325] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:23:20.448 [2024-04-26 16:05:59.987457] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.448 [2024-04-26 16:05:59.987469] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.448 [2024-04-26 16:05:59.987474] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.987479] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:23:20.448 [2024-04-26 16:05:59.987507] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.987517] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:23:20.448 [2024-04-26 16:05:59.987530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-04-26 16:05:59.987549] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.987556] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:20.448 [2024-04-26 16:05:59.987565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-04-26 16:05:59.987575] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.987581] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x614000002040) 00:23:20.448 [2024-04-26 16:05:59.987590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-04-26 16:05:59.987604] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.987610] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x614000002040) 00:23:20.448 [2024-04-26 16:05:59.987619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.448 [2024-04-26 16:05:59.987638] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:23:20.448 [2024-04-26 16:05:59.987648] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:20.448 [2024-04-26 16:05:59.987654] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b940, cid 6, qid 0 00:23:20.448 [2024-04-26 16:05:59.987660] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:23:20.448 [2024-04-26 16:05:59.988024] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.448 [2024-04-26 16:05:59.988039] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.448 [2024-04-26 16:05:59.988045] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.988050] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=8192, cccid=5 00:23:20.448 [2024-04-26 16:05:59.988057] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b7e0) on tqpair(0x614000002040): expected_datao=0, payload_size=8192 00:23:20.448 [2024-04-26 16:05:59.988064] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.988088] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.988095] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.988103] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.448 [2024-04-26 16:05:59.988110] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.448 [2024-04-26 16:05:59.988115] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.988120] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=512, cccid=4 00:23:20.448 [2024-04-26 16:05:59.988125] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=512 00:23:20.448 [2024-04-26 16:05:59.988131] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.988139] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.988144] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.988150] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.448 [2024-04-26 16:05:59.988160] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.448 [2024-04-26 16:05:59.988167] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.988172] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=512, cccid=6 00:23:20.448 [2024-04-26 16:05:59.988178] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b940) on tqpair(0x614000002040): expected_datao=0, payload_size=512 00:23:20.448 [2024-04-26 16:05:59.988183] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.988191] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.988195] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.988202] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:20.448 [2024-04-26 16:05:59.988209] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:20.448 [2024-04-26 16:05:59.988214] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.988219] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=7 00:23:20.448 [2024-04-26 16:05:59.988224] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001baa0) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:23:20.448 [2024-04-26 16:05:59.988229] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.988238] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.988243] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.988259] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.448 [2024-04-26 16:05:59.988266] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.448 [2024-04-26 16:05:59.988271] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.988277] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:23:20.448 [2024-04-26 16:05:59.988302] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.448 [2024-04-26 16:05:59.988310] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.448 [2024-04-26 16:05:59.988319] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.988324] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:20.448 [2024-04-26 16:05:59.988337] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.448 [2024-04-26 16:05:59.988345] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.448 [2024-04-26 16:05:59.988349] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.988355] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b940) on tqpair=0x614000002040 00:23:20.448 [2024-04-26 16:05:59.988366] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.448 [2024-04-26 16:05:59.988373] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.448 [2024-04-26 16:05:59.988378] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.448 [2024-04-26 16:05:59.988383] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x614000002040 00:23:20.448 ===================================================== 00:23:20.448 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:20.449 ===================================================== 00:23:20.449 Controller Capabilities/Features 00:23:20.449 ================================ 00:23:20.449 Vendor ID: 8086 00:23:20.449 Subsystem Vendor ID: 8086 00:23:20.449 Serial Number: SPDK00000000000001 00:23:20.449 Model Number: SPDK bdev Controller 00:23:20.449 Firmware Version: 24.05 00:23:20.449 Recommended Arb Burst: 6 00:23:20.449 IEEE OUI Identifier: e4 d2 5c 00:23:20.449 
Multi-path I/O 00:23:20.449 May have multiple subsystem ports: Yes 00:23:20.449 May have multiple controllers: Yes 00:23:20.449 Associated with SR-IOV VF: No 00:23:20.449 Max Data Transfer Size: 131072 00:23:20.449 Max Number of Namespaces: 32 00:23:20.449 Max Number of I/O Queues: 127 00:23:20.449 NVMe Specification Version (VS): 1.3 00:23:20.449 NVMe Specification Version (Identify): 1.3 00:23:20.449 Maximum Queue Entries: 128 00:23:20.449 Contiguous Queues Required: Yes 00:23:20.449 Arbitration Mechanisms Supported 00:23:20.449 Weighted Round Robin: Not Supported 00:23:20.449 Vendor Specific: Not Supported 00:23:20.449 Reset Timeout: 15000 ms 00:23:20.449 Doorbell Stride: 4 bytes 00:23:20.449 NVM Subsystem Reset: Not Supported 00:23:20.449 Command Sets Supported 00:23:20.449 NVM Command Set: Supported 00:23:20.449 Boot Partition: Not Supported 00:23:20.449 Memory Page Size Minimum: 4096 bytes 00:23:20.449 Memory Page Size Maximum: 4096 bytes 00:23:20.449 Persistent Memory Region: Not Supported 00:23:20.449 Optional Asynchronous Events Supported 00:23:20.449 Namespace Attribute Notices: Supported 00:23:20.449 Firmware Activation Notices: Not Supported 00:23:20.449 ANA Change Notices: Not Supported 00:23:20.449 PLE Aggregate Log Change Notices: Not Supported 00:23:20.449 LBA Status Info Alert Notices: Not Supported 00:23:20.449 EGE Aggregate Log Change Notices: Not Supported 00:23:20.449 Normal NVM Subsystem Shutdown event: Not Supported 00:23:20.449 Zone Descriptor Change Notices: Not Supported 00:23:20.449 Discovery Log Change Notices: Not Supported 00:23:20.449 Controller Attributes 00:23:20.449 128-bit Host Identifier: Supported 00:23:20.449 Non-Operational Permissive Mode: Not Supported 00:23:20.449 NVM Sets: Not Supported 00:23:20.449 Read Recovery Levels: Not Supported 00:23:20.449 Endurance Groups: Not Supported 00:23:20.449 Predictable Latency Mode: Not Supported 00:23:20.449 Traffic Based Keep ALive: Not Supported 00:23:20.449 Namespace Granularity: Not Supported 00:23:20.449 SQ Associations: Not Supported 00:23:20.449 UUID List: Not Supported 00:23:20.449 Multi-Domain Subsystem: Not Supported 00:23:20.449 Fixed Capacity Management: Not Supported 00:23:20.449 Variable Capacity Management: Not Supported 00:23:20.449 Delete Endurance Group: Not Supported 00:23:20.449 Delete NVM Set: Not Supported 00:23:20.449 Extended LBA Formats Supported: Not Supported 00:23:20.449 Flexible Data Placement Supported: Not Supported 00:23:20.449 00:23:20.449 Controller Memory Buffer Support 00:23:20.449 ================================ 00:23:20.449 Supported: No 00:23:20.449 00:23:20.449 Persistent Memory Region Support 00:23:20.449 ================================ 00:23:20.449 Supported: No 00:23:20.449 00:23:20.449 Admin Command Set Attributes 00:23:20.449 ============================ 00:23:20.449 Security Send/Receive: Not Supported 00:23:20.449 Format NVM: Not Supported 00:23:20.449 Firmware Activate/Download: Not Supported 00:23:20.449 Namespace Management: Not Supported 00:23:20.449 Device Self-Test: Not Supported 00:23:20.449 Directives: Not Supported 00:23:20.449 NVMe-MI: Not Supported 00:23:20.449 Virtualization Management: Not Supported 00:23:20.449 Doorbell Buffer Config: Not Supported 00:23:20.449 Get LBA Status Capability: Not Supported 00:23:20.449 Command & Feature Lockdown Capability: Not Supported 00:23:20.449 Abort Command Limit: 4 00:23:20.449 Async Event Request Limit: 4 00:23:20.449 Number of Firmware Slots: N/A 00:23:20.449 Firmware Slot 1 Read-Only: N/A 00:23:20.449 Firmware 
Activation Without Reset: N/A 00:23:20.449 Multiple Update Detection Support: N/A 00:23:20.449 Firmware Update Granularity: No Information Provided 00:23:20.449 Per-Namespace SMART Log: No 00:23:20.449 Asymmetric Namespace Access Log Page: Not Supported 00:23:20.449 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:20.449 Command Effects Log Page: Supported 00:23:20.449 Get Log Page Extended Data: Supported 00:23:20.449 Telemetry Log Pages: Not Supported 00:23:20.449 Persistent Event Log Pages: Not Supported 00:23:20.449 Supported Log Pages Log Page: May Support 00:23:20.449 Commands Supported & Effects Log Page: Not Supported 00:23:20.449 Feature Identifiers & Effects Log Page:May Support 00:23:20.449 NVMe-MI Commands & Effects Log Page: May Support 00:23:20.449 Data Area 4 for Telemetry Log: Not Supported 00:23:20.449 Error Log Page Entries Supported: 128 00:23:20.449 Keep Alive: Supported 00:23:20.449 Keep Alive Granularity: 10000 ms 00:23:20.449 00:23:20.449 NVM Command Set Attributes 00:23:20.449 ========================== 00:23:20.449 Submission Queue Entry Size 00:23:20.449 Max: 64 00:23:20.449 Min: 64 00:23:20.449 Completion Queue Entry Size 00:23:20.449 Max: 16 00:23:20.449 Min: 16 00:23:20.449 Number of Namespaces: 32 00:23:20.449 Compare Command: Supported 00:23:20.449 Write Uncorrectable Command: Not Supported 00:23:20.449 Dataset Management Command: Supported 00:23:20.449 Write Zeroes Command: Supported 00:23:20.449 Set Features Save Field: Not Supported 00:23:20.449 Reservations: Supported 00:23:20.449 Timestamp: Not Supported 00:23:20.449 Copy: Supported 00:23:20.449 Volatile Write Cache: Present 00:23:20.449 Atomic Write Unit (Normal): 1 00:23:20.449 Atomic Write Unit (PFail): 1 00:23:20.449 Atomic Compare & Write Unit: 1 00:23:20.449 Fused Compare & Write: Supported 00:23:20.449 Scatter-Gather List 00:23:20.449 SGL Command Set: Supported 00:23:20.449 SGL Keyed: Supported 00:23:20.449 SGL Bit Bucket Descriptor: Not Supported 00:23:20.449 SGL Metadata Pointer: Not Supported 00:23:20.449 Oversized SGL: Not Supported 00:23:20.449 SGL Metadata Address: Not Supported 00:23:20.449 SGL Offset: Supported 00:23:20.449 Transport SGL Data Block: Not Supported 00:23:20.449 Replay Protected Memory Block: Not Supported 00:23:20.449 00:23:20.449 Firmware Slot Information 00:23:20.449 ========================= 00:23:20.449 Active slot: 1 00:23:20.449 Slot 1 Firmware Revision: 24.05 00:23:20.449 00:23:20.449 00:23:20.449 Commands Supported and Effects 00:23:20.449 ============================== 00:23:20.449 Admin Commands 00:23:20.449 -------------- 00:23:20.449 Get Log Page (02h): Supported 00:23:20.449 Identify (06h): Supported 00:23:20.449 Abort (08h): Supported 00:23:20.449 Set Features (09h): Supported 00:23:20.449 Get Features (0Ah): Supported 00:23:20.449 Asynchronous Event Request (0Ch): Supported 00:23:20.449 Keep Alive (18h): Supported 00:23:20.449 I/O Commands 00:23:20.449 ------------ 00:23:20.449 Flush (00h): Supported LBA-Change 00:23:20.449 Write (01h): Supported LBA-Change 00:23:20.449 Read (02h): Supported 00:23:20.449 Compare (05h): Supported 00:23:20.449 Write Zeroes (08h): Supported LBA-Change 00:23:20.449 Dataset Management (09h): Supported LBA-Change 00:23:20.449 Copy (19h): Supported LBA-Change 00:23:20.449 Unknown (79h): Supported LBA-Change 00:23:20.449 Unknown (7Ah): Supported 00:23:20.449 00:23:20.449 Error Log 00:23:20.449 ========= 00:23:20.449 00:23:20.449 Arbitration 00:23:20.449 =========== 00:23:20.449 Arbitration Burst: 1 00:23:20.449 00:23:20.449 Power 
Management 00:23:20.449 ================ 00:23:20.449 Number of Power States: 1 00:23:20.449 Current Power State: Power State #0 00:23:20.449 Power State #0: 00:23:20.449 Max Power: 0.00 W 00:23:20.449 Non-Operational State: Operational 00:23:20.449 Entry Latency: Not Reported 00:23:20.449 Exit Latency: Not Reported 00:23:20.449 Relative Read Throughput: 0 00:23:20.449 Relative Read Latency: 0 00:23:20.449 Relative Write Throughput: 0 00:23:20.449 Relative Write Latency: 0 00:23:20.449 Idle Power: Not Reported 00:23:20.449 Active Power: Not Reported 00:23:20.449 Non-Operational Permissive Mode: Not Supported 00:23:20.449 00:23:20.449 Health Information 00:23:20.449 ================== 00:23:20.449 Critical Warnings: 00:23:20.450 Available Spare Space: OK 00:23:20.450 Temperature: OK 00:23:20.450 Device Reliability: OK 00:23:20.450 Read Only: No 00:23:20.450 Volatile Memory Backup: OK 00:23:20.450 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:20.450 Temperature Threshold: [2024-04-26 16:05:59.988530] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.450 [2024-04-26 16:05:59.988540] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x614000002040) 00:23:20.450 [2024-04-26 16:05:59.988551] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.450 [2024-04-26 16:05:59.988569] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:23:20.450 [2024-04-26 16:05:59.988717] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.450 [2024-04-26 16:05:59.988730] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.450 [2024-04-26 16:05:59.988735] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.450 [2024-04-26 16:05:59.988744] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x614000002040 00:23:20.450 [2024-04-26 16:05:59.988792] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:20.450 [2024-04-26 16:05:59.988809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-04-26 16:05:59.988819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-04-26 16:05:59.988826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-04-26 16:05:59.988834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.450 [2024-04-26 16:05:59.988845] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.450 [2024-04-26 16:05:59.988851] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.450 [2024-04-26 16:05:59.988862] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:20.450 [2024-04-26 16:05:59.988873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.450 [2024-04-26 16:05:59.988891] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:20.450 [2024-04-26 
16:05:59.989034] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.450 [2024-04-26 16:05:59.989047] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.450 [2024-04-26 16:05:59.989053] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.450 [2024-04-26 16:05:59.989058] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:20.450 [2024-04-26 16:05:59.989077] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.450 [2024-04-26 16:05:59.989084] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.450 [2024-04-26 16:05:59.989093] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:20.450 [2024-04-26 16:05:59.989104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.450 [2024-04-26 16:05:59.989126] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:20.450 [2024-04-26 16:05:59.989282] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.450 [2024-04-26 16:05:59.989294] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.450 [2024-04-26 16:05:59.989299] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.450 [2024-04-26 16:05:59.989305] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:20.450 [2024-04-26 16:05:59.989313] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:20.450 [2024-04-26 16:05:59.989320] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:20.450 [2024-04-26 16:05:59.989335] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.450 [2024-04-26 16:05:59.989341] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.450 [2024-04-26 16:05:59.989347] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:20.450 [2024-04-26 16:05:59.989357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.450 [2024-04-26 16:05:59.989378] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:20.450 [2024-04-26 16:05:59.989645] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.450 [2024-04-26 16:05:59.989653] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.450 [2024-04-26 16:05:59.989658] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.450 [2024-04-26 16:05:59.989665] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:20.450 [2024-04-26 16:05:59.989679] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.450 [2024-04-26 16:05:59.989685] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.450 [2024-04-26 16:05:59.989690] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:20.450 [2024-04-26 16:05:59.989702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:20.450 [2024-04-26 16:05:59.989716] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:20.450 [2024-04-26 16:05:59.989847] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.450 [2024-04-26 16:05:59.989859] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.450 [2024-04-26 16:05:59.989864] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.450 [2024-04-26 16:05:59.989869] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:20.450 [2024-04-26 16:05:59.989885] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.450 [2024-04-26 16:05:59.989891] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.450 [2024-04-26 16:05:59.989896] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:20.450 [2024-04-26 16:05:59.989906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.450 [2024-04-26 16:05:59.989922] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:20.450 [2024-04-26 16:05:59.990053] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.450 [2024-04-26 16:05:59.990065] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.450 [2024-04-26 16:05:59.994112] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.450 [2024-04-26 16:05:59.994126] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:20.450 [2024-04-26 16:05:59.994146] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:20.450 [2024-04-26 16:05:59.994152] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:20.450 [2024-04-26 16:05:59.994157] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:20.450 [2024-04-26 16:05:59.994167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.450 [2024-04-26 16:05:59.994185] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:20.450 [2024-04-26 16:05:59.994395] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:20.450 [2024-04-26 16:05:59.994413] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:20.450 [2024-04-26 16:05:59.994419] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:20.450 [2024-04-26 16:05:59.994424] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:20.450 [2024-04-26 16:05:59.994437] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:23:20.450 0 Kelvin (-273 Celsius) 00:23:20.450 Available Spare: 0% 00:23:20.450 Available Spare Threshold: 0% 00:23:20.451 Life Percentage Used: 0% 00:23:20.451 Data Units Read: 0 00:23:20.451 Data Units Written: 0 00:23:20.451 Host Read Commands: 0 00:23:20.451 Host Write Commands: 0 00:23:20.451 Controller Busy Time: 0 minutes 00:23:20.451 Power Cycles: 0 00:23:20.451 Power On Hours: 0 hours 00:23:20.451 Unsafe Shutdowns: 0 00:23:20.451 Unrecoverable Media Errors: 0 
00:23:20.451 Lifetime Error Log Entries: 0 00:23:20.451 Warning Temperature Time: 0 minutes 00:23:20.451 Critical Temperature Time: 0 minutes 00:23:20.451 00:23:20.451 Number of Queues 00:23:20.451 ================ 00:23:20.451 Number of I/O Submission Queues: 127 00:23:20.451 Number of I/O Completion Queues: 127 00:23:20.451 00:23:20.451 Active Namespaces 00:23:20.451 ================= 00:23:20.451 Namespace ID:1 00:23:20.451 Error Recovery Timeout: Unlimited 00:23:20.451 Command Set Identifier: NVM (00h) 00:23:20.451 Deallocate: Supported 00:23:20.451 Deallocated/Unwritten Error: Not Supported 00:23:20.451 Deallocated Read Value: Unknown 00:23:20.451 Deallocate in Write Zeroes: Not Supported 00:23:20.451 Deallocated Guard Field: 0xFFFF 00:23:20.451 Flush: Supported 00:23:20.451 Reservation: Supported 00:23:20.451 Namespace Sharing Capabilities: Multiple Controllers 00:23:20.451 Size (in LBAs): 131072 (0GiB) 00:23:20.451 Capacity (in LBAs): 131072 (0GiB) 00:23:20.451 Utilization (in LBAs): 131072 (0GiB) 00:23:20.451 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:20.451 EUI64: ABCDEF0123456789 00:23:20.451 UUID: 0779a0a8-d854-4fbe-9c90-1dd30c75032c 00:23:20.451 Thin Provisioning: Not Supported 00:23:20.451 Per-NS Atomic Units: Yes 00:23:20.451 Atomic Boundary Size (Normal): 0 00:23:20.451 Atomic Boundary Size (PFail): 0 00:23:20.451 Atomic Boundary Offset: 0 00:23:20.451 Maximum Single Source Range Length: 65535 00:23:20.451 Maximum Copy Length: 65535 00:23:20.451 Maximum Source Range Count: 1 00:23:20.451 NGUID/EUI64 Never Reused: No 00:23:20.451 Namespace Write Protected: No 00:23:20.451 Number of LBA Formats: 1 00:23:20.451 Current LBA Format: LBA Format #00 00:23:20.451 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:20.451 00:23:20.451 16:06:00 -- host/identify.sh@51 -- # sync 00:23:20.451 16:06:00 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:20.451 16:06:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:20.451 16:06:00 -- common/autotest_common.sh@10 -- # set +x 00:23:20.451 16:06:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.451 16:06:00 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:20.451 16:06:00 -- host/identify.sh@56 -- # nvmftestfini 00:23:20.451 16:06:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:20.451 16:06:00 -- nvmf/common.sh@117 -- # sync 00:23:20.451 16:06:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:20.451 16:06:00 -- nvmf/common.sh@120 -- # set +e 00:23:20.451 16:06:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:20.451 16:06:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:20.451 rmmod nvme_tcp 00:23:20.451 rmmod nvme_fabrics 00:23:20.451 rmmod nvme_keyring 00:23:20.451 16:06:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:20.451 16:06:00 -- nvmf/common.sh@124 -- # set -e 00:23:20.451 16:06:00 -- nvmf/common.sh@125 -- # return 0 00:23:20.451 16:06:00 -- nvmf/common.sh@478 -- # '[' -n 2530783 ']' 00:23:20.451 16:06:00 -- nvmf/common.sh@479 -- # killprocess 2530783 00:23:20.451 16:06:00 -- common/autotest_common.sh@936 -- # '[' -z 2530783 ']' 00:23:20.451 16:06:00 -- common/autotest_common.sh@940 -- # kill -0 2530783 00:23:20.451 16:06:00 -- common/autotest_common.sh@941 -- # uname 00:23:20.716 16:06:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:20.716 16:06:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2530783 00:23:20.716 16:06:00 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:23:20.716 16:06:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:20.716 16:06:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2530783' 00:23:20.716 killing process with pid 2530783 00:23:20.716 16:06:00 -- common/autotest_common.sh@955 -- # kill 2530783 00:23:20.716 [2024-04-26 16:06:00.157148] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:23:20.716 16:06:00 -- common/autotest_common.sh@960 -- # wait 2530783 00:23:22.094 16:06:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:22.094 16:06:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:22.094 16:06:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:22.094 16:06:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:22.094 16:06:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:22.094 16:06:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.094 16:06:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.094 16:06:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.629 16:06:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:24.629 00:23:24.629 real 0m11.030s 00:23:24.629 user 0m11.602s 00:23:24.629 sys 0m4.781s 00:23:24.629 16:06:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:24.629 16:06:03 -- common/autotest_common.sh@10 -- # set +x 00:23:24.629 ************************************ 00:23:24.629 END TEST nvmf_identify 00:23:24.629 ************************************ 00:23:24.629 16:06:03 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:24.629 16:06:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:24.629 16:06:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:24.629 16:06:03 -- common/autotest_common.sh@10 -- # set +x 00:23:24.629 ************************************ 00:23:24.629 START TEST nvmf_perf 00:23:24.629 ************************************ 00:23:24.629 16:06:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:24.629 * Looking for test storage... 
00:23:24.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:24.629 16:06:03 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.629 16:06:03 -- nvmf/common.sh@7 -- # uname -s 00:23:24.629 16:06:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.629 16:06:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.629 16:06:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.629 16:06:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.629 16:06:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.629 16:06:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.629 16:06:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.629 16:06:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.629 16:06:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.629 16:06:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.629 16:06:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:24.629 16:06:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:24.629 16:06:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.629 16:06:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.629 16:06:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.629 16:06:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.629 16:06:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:24.629 16:06:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.629 16:06:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.629 16:06:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.629 16:06:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.629 16:06:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.629 16:06:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.629 16:06:04 -- paths/export.sh@5 -- # export PATH 00:23:24.629 16:06:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.629 16:06:04 -- nvmf/common.sh@47 -- # : 0 00:23:24.629 16:06:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:24.629 16:06:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:24.629 16:06:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.629 16:06:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.629 16:06:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.629 16:06:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:24.629 16:06:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:24.629 16:06:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:24.629 16:06:04 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:24.629 16:06:04 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:24.629 16:06:04 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:24.629 16:06:04 -- host/perf.sh@17 -- # nvmftestinit 00:23:24.629 16:06:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:24.629 16:06:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.629 16:06:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:24.629 16:06:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:24.629 16:06:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:24.629 16:06:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.629 16:06:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.629 16:06:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.629 16:06:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:24.629 16:06:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:24.629 16:06:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:24.629 16:06:04 -- common/autotest_common.sh@10 -- # set +x 00:23:29.943 16:06:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:29.943 16:06:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:29.943 16:06:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:29.943 16:06:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:29.943 16:06:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:29.943 16:06:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:29.943 16:06:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:29.943 16:06:09 -- nvmf/common.sh@295 -- # net_devs=() 
00:23:29.943 16:06:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:29.943 16:06:09 -- nvmf/common.sh@296 -- # e810=() 00:23:29.943 16:06:09 -- nvmf/common.sh@296 -- # local -ga e810 00:23:29.943 16:06:09 -- nvmf/common.sh@297 -- # x722=() 00:23:29.943 16:06:09 -- nvmf/common.sh@297 -- # local -ga x722 00:23:29.943 16:06:09 -- nvmf/common.sh@298 -- # mlx=() 00:23:29.943 16:06:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:29.943 16:06:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:29.943 16:06:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:29.943 16:06:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:29.943 16:06:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:29.943 16:06:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:29.943 16:06:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:29.943 16:06:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:29.943 16:06:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:29.943 16:06:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:29.943 16:06:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:29.943 16:06:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:29.943 16:06:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:29.943 16:06:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:29.943 16:06:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:29.943 16:06:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:29.943 16:06:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:29.943 16:06:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:29.943 16:06:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:29.943 16:06:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:29.943 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:29.943 16:06:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:29.943 16:06:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:29.943 16:06:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.943 16:06:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.943 16:06:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:29.943 16:06:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:29.943 16:06:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:29.943 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:29.943 16:06:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:29.943 16:06:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:29.943 16:06:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.943 16:06:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.943 16:06:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:29.943 16:06:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:29.943 16:06:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:29.943 16:06:09 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:29.943 16:06:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:29.943 16:06:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.943 16:06:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:29.943 16:06:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:23:29.943 16:06:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:29.943 Found net devices under 0000:86:00.0: cvl_0_0 00:23:29.943 16:06:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.943 16:06:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:29.943 16:06:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.943 16:06:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:29.943 16:06:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.943 16:06:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:29.943 Found net devices under 0000:86:00.1: cvl_0_1 00:23:29.943 16:06:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.943 16:06:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:29.943 16:06:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:29.943 16:06:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:29.943 16:06:09 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:29.943 16:06:09 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:29.943 16:06:09 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.943 16:06:09 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:29.943 16:06:09 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:29.943 16:06:09 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:29.943 16:06:09 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:29.943 16:06:09 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:29.943 16:06:09 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:29.943 16:06:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:29.943 16:06:09 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.943 16:06:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:29.943 16:06:09 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:29.943 16:06:09 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:29.943 16:06:09 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:29.944 16:06:09 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:29.944 16:06:09 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:29.944 16:06:09 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:29.944 16:06:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:29.944 16:06:09 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:29.944 16:06:09 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:29.944 16:06:09 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:29.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:23:29.944 00:23:29.944 --- 10.0.0.2 ping statistics --- 00:23:29.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.944 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:23:29.944 16:06:09 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:29.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:29.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:23:29.944 00:23:29.944 --- 10.0.0.1 ping statistics --- 00:23:29.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.944 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:23:29.944 16:06:09 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.944 16:06:09 -- nvmf/common.sh@411 -- # return 0 00:23:29.944 16:06:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:29.944 16:06:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.944 16:06:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:29.944 16:06:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:29.944 16:06:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.944 16:06:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:29.944 16:06:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:29.944 16:06:09 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:29.944 16:06:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:29.944 16:06:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:29.944 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:23:29.944 16:06:09 -- nvmf/common.sh@470 -- # nvmfpid=2535198 00:23:29.944 16:06:09 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:29.944 16:06:09 -- nvmf/common.sh@471 -- # waitforlisten 2535198 00:23:29.944 16:06:09 -- common/autotest_common.sh@817 -- # '[' -z 2535198 ']' 00:23:29.944 16:06:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.944 16:06:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:29.944 16:06:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.944 16:06:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:29.944 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:23:29.944 [2024-04-26 16:06:09.556741] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:23:29.944 [2024-04-26 16:06:09.556824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.944 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.203 [2024-04-26 16:06:09.666332] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:30.462 [2024-04-26 16:06:09.891257] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.462 [2024-04-26 16:06:09.891306] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.462 [2024-04-26 16:06:09.891316] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.462 [2024-04-26 16:06:09.891327] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.462 [2024-04-26 16:06:09.891335] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:30.462 [2024-04-26 16:06:09.891598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.462 [2024-04-26 16:06:09.891614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.462 [2024-04-26 16:06:09.891717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.462 [2024-04-26 16:06:09.891726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.721 16:06:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:30.721 16:06:10 -- common/autotest_common.sh@850 -- # return 0 00:23:30.721 16:06:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:30.721 16:06:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:30.721 16:06:10 -- common/autotest_common.sh@10 -- # set +x 00:23:30.721 16:06:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.721 16:06:10 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:30.721 16:06:10 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:34.008 16:06:13 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:34.008 16:06:13 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:34.008 16:06:13 -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:34.008 16:06:13 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:34.267 16:06:13 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:34.267 16:06:13 -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:23:34.267 16:06:13 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:34.267 16:06:13 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:34.267 16:06:13 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:34.525 [2024-04-26 16:06:14.047989] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.525 16:06:14 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:34.784 16:06:14 -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:34.784 16:06:14 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:34.784 16:06:14 -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:34.784 16:06:14 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:35.042 16:06:14 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:35.301 [2024-04-26 16:06:14.814131] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.301 16:06:14 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:35.559 16:06:15 -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:23:35.559 16:06:15 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:35.559 16:06:15 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
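Between the device discovery and the first perf run, the trace covers the whole NVMe/TCP target bring-up: nvmf_tcp_init moves cvl_0_0 into the cvl_0_0_ns_spdk namespace and assigns 10.0.0.1/10.0.0.2, nvmfappstart launches nvmf_tgt inside that namespace, and host/perf.sh configures it over RPC. Condensed from the commands visible above (paths abbreviated and an RPC shorthand variable introduced for readability; the commands and arguments themselves are taken from this run):

    # network split: target interface in a namespace, initiator interface in the default namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # target application and subsystem configuration (the script backgrounds the target
    # and waits for its RPC socket before issuing the rpc.py calls)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0        # malloc bdev from bdev_malloc_create 64 512
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1        # local NVMe at 0000:5e:00.0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420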
00:23:35.559 16:06:15 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:36.937 Initializing NVMe Controllers 00:23:36.937 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:23:36.937 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:23:36.937 Initialization complete. Launching workers. 00:23:36.937 ======================================================== 00:23:36.937 Latency(us) 00:23:36.937 Device Information : IOPS MiB/s Average min max 00:23:36.937 PCIE (0000:5e:00.0) NSID 1 from core 0: 89523.67 349.70 357.05 38.86 7223.88 00:23:36.937 ======================================================== 00:23:36.937 Total : 89523.67 349.70 357.05 38.86 7223.88 00:23:36.937 00:23:36.937 16:06:16 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:36.937 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.317 Initializing NVMe Controllers 00:23:38.317 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:38.317 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:38.317 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:38.317 Initialization complete. Launching workers. 00:23:38.317 ======================================================== 00:23:38.317 Latency(us) 00:23:38.317 Device Information : IOPS MiB/s Average min max 00:23:38.317 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 97.00 0.38 10312.19 454.68 45562.65 00:23:38.317 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 63.00 0.25 16213.12 7840.75 47892.13 00:23:38.317 ======================================================== 00:23:38.317 Total : 160.00 0.62 12635.68 454.68 47892.13 00:23:38.317 00:23:38.317 16:06:17 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:38.317 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.696 Initializing NVMe Controllers 00:23:39.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:39.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:39.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:39.696 Initialization complete. Launching workers. 
00:23:39.696 ======================================================== 00:23:39.696 Latency(us) 00:23:39.696 Device Information : IOPS MiB/s Average min max 00:23:39.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7006.09 27.37 4567.50 865.67 9910.14 00:23:39.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3808.33 14.88 8416.30 4378.10 18863.82 00:23:39.696 ======================================================== 00:23:39.696 Total : 10814.42 42.24 5922.87 865.67 18863.82 00:23:39.696 00:23:39.696 16:06:19 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:39.696 16:06:19 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:39.696 16:06:19 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:39.955 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.245 Initializing NVMe Controllers 00:23:43.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:43.246 Controller IO queue size 128, less than required. 00:23:43.246 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:43.246 Controller IO queue size 128, less than required. 00:23:43.246 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:43.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:43.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:43.246 Initialization complete. Launching workers. 00:23:43.246 ======================================================== 00:23:43.246 Latency(us) 00:23:43.246 Device Information : IOPS MiB/s Average min max 00:23:43.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 823.00 205.75 165266.02 92914.60 383203.73 00:23:43.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 542.00 135.50 252784.63 125169.62 534884.67 00:23:43.246 ======================================================== 00:23:43.246 Total : 1365.00 341.25 200016.99 92914.60 534884.67 00:23:43.246 00:23:43.246 16:06:22 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:43.246 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.246 No valid NVMe controllers or AIO or URING devices found 00:23:43.246 Initializing NVMe Controllers 00:23:43.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:43.246 Controller IO queue size 128, less than required. 00:23:43.246 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:43.246 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:43.246 Controller IO queue size 128, less than required. 00:23:43.246 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:43.246 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:23:43.246 WARNING: Some requested NVMe devices were skipped 00:23:43.246 16:06:22 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:43.246 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.534 Initializing NVMe Controllers 00:23:46.534 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:46.534 Controller IO queue size 128, less than required. 00:23:46.534 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:46.534 Controller IO queue size 128, less than required. 00:23:46.534 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:46.534 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:46.534 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:46.534 Initialization complete. Launching workers. 00:23:46.534 00:23:46.534 ==================== 00:23:46.534 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:46.534 TCP transport: 00:23:46.534 polls: 42941 00:23:46.534 idle_polls: 12317 00:23:46.534 sock_completions: 30624 00:23:46.534 nvme_completions: 3179 00:23:46.534 submitted_requests: 4716 00:23:46.534 queued_requests: 1 00:23:46.534 00:23:46.534 ==================== 00:23:46.534 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:46.534 TCP transport: 00:23:46.534 polls: 42945 00:23:46.534 idle_polls: 15069 00:23:46.534 sock_completions: 27876 00:23:46.534 nvme_completions: 3281 00:23:46.534 submitted_requests: 4928 00:23:46.534 queued_requests: 1 00:23:46.534 ======================================================== 00:23:46.534 Latency(us) 00:23:46.534 Device Information : IOPS MiB/s Average min max 00:23:46.534 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 794.49 198.62 179077.24 86745.63 489942.31 00:23:46.534 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 819.99 205.00 157950.81 69545.85 374877.99 00:23:46.534 ======================================================== 00:23:46.534 Total : 1614.48 403.62 168347.19 69545.85 489942.31 00:23:46.534 00:23:46.534 16:06:25 -- host/perf.sh@66 -- # sync 00:23:46.534 16:06:25 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:46.534 16:06:26 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:46.534 16:06:26 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:46.534 16:06:26 -- host/perf.sh@114 -- # nvmftestfini 00:23:46.534 16:06:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:46.534 16:06:26 -- nvmf/common.sh@117 -- # sync 00:23:46.534 16:06:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:46.534 16:06:26 -- nvmf/common.sh@120 -- # set +e 00:23:46.534 16:06:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:46.534 16:06:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:46.534 rmmod nvme_tcp 00:23:46.534 rmmod nvme_fabrics 00:23:46.534 rmmod nvme_keyring 00:23:46.534 16:06:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:46.534 16:06:26 -- nvmf/common.sh@124 -- # set -e 00:23:46.534 16:06:26 -- nvmf/common.sh@125 -- # return 0 00:23:46.534 16:06:26 -- 
nvmf/common.sh@478 -- # '[' -n 2535198 ']' 00:23:46.534 16:06:26 -- nvmf/common.sh@479 -- # killprocess 2535198 00:23:46.534 16:06:26 -- common/autotest_common.sh@936 -- # '[' -z 2535198 ']' 00:23:46.534 16:06:26 -- common/autotest_common.sh@940 -- # kill -0 2535198 00:23:46.534 16:06:26 -- common/autotest_common.sh@941 -- # uname 00:23:46.534 16:06:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:46.534 16:06:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2535198 00:23:46.534 16:06:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:46.534 16:06:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:46.534 16:06:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2535198' 00:23:46.534 killing process with pid 2535198 00:23:46.534 16:06:26 -- common/autotest_common.sh@955 -- # kill 2535198 00:23:46.534 16:06:26 -- common/autotest_common.sh@960 -- # wait 2535198 00:23:49.070 16:06:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:49.070 16:06:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:49.070 16:06:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:49.070 16:06:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:49.070 16:06:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:49.070 16:06:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.070 16:06:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:49.070 16:06:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.606 16:06:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:51.606 00:23:51.606 real 0m26.910s 00:23:51.606 user 1m14.520s 00:23:51.606 sys 0m7.165s 00:23:51.606 16:06:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:51.606 16:06:30 -- common/autotest_common.sh@10 -- # set +x 00:23:51.606 ************************************ 00:23:51.606 END TEST nvmf_perf 00:23:51.606 ************************************ 00:23:51.606 16:06:30 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:51.606 16:06:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:51.606 16:06:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:51.606 16:06:30 -- common/autotest_common.sh@10 -- # set +x 00:23:51.606 ************************************ 00:23:51.606 START TEST nvmf_fio_host 00:23:51.606 ************************************ 00:23:51.606 16:06:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:51.606 * Looking for test storage... 
00:23:51.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:51.606 16:06:31 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:51.606 16:06:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.606 16:06:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.606 16:06:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.606 16:06:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.607 16:06:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.607 16:06:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.607 16:06:31 -- paths/export.sh@5 -- # export PATH 00:23:51.607 16:06:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.607 16:06:31 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:51.607 16:06:31 -- nvmf/common.sh@7 -- # uname -s 00:23:51.607 16:06:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.607 16:06:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.607 16:06:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.607 16:06:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.607 16:06:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.607 16:06:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.607 16:06:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.607 16:06:31 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.607 16:06:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.607 16:06:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.607 16:06:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:51.607 16:06:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:51.607 16:06:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.607 16:06:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.607 16:06:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:51.607 16:06:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.607 16:06:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:51.607 16:06:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.607 16:06:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.607 16:06:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.607 16:06:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.607 16:06:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.607 16:06:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.607 16:06:31 -- paths/export.sh@5 -- # export PATH 00:23:51.607 16:06:31 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.607 16:06:31 -- nvmf/common.sh@47 -- # : 0 00:23:51.607 16:06:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:51.607 16:06:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:51.607 16:06:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.607 16:06:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.607 16:06:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.607 16:06:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:51.607 16:06:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:51.607 16:06:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:51.607 16:06:31 -- host/fio.sh@12 -- # nvmftestinit 00:23:51.607 16:06:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:51.607 16:06:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.607 16:06:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:51.607 16:06:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:51.607 16:06:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:51.607 16:06:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.607 16:06:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.607 16:06:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.607 16:06:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:51.607 16:06:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:51.607 16:06:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:51.607 16:06:31 -- common/autotest_common.sh@10 -- # set +x 00:23:56.879 16:06:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:56.879 16:06:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:56.879 16:06:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:56.879 16:06:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:56.879 16:06:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:56.879 16:06:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:56.879 16:06:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:56.879 16:06:35 -- nvmf/common.sh@295 -- # net_devs=() 00:23:56.879 16:06:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:56.879 16:06:35 -- nvmf/common.sh@296 -- # e810=() 00:23:56.879 16:06:35 -- nvmf/common.sh@296 -- # local -ga e810 00:23:56.879 16:06:35 -- nvmf/common.sh@297 -- # x722=() 00:23:56.879 16:06:35 -- nvmf/common.sh@297 -- # local -ga x722 00:23:56.879 16:06:35 -- nvmf/common.sh@298 -- # mlx=() 00:23:56.879 16:06:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:56.879 16:06:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.879 16:06:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.879 16:06:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.879 16:06:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.879 16:06:35 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.879 16:06:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.879 16:06:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.879 16:06:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.879 16:06:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.879 16:06:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.879 16:06:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.879 16:06:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:56.879 16:06:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:56.879 16:06:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:56.879 16:06:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:56.879 16:06:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:56.879 16:06:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:56.879 16:06:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.879 16:06:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:56.879 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:56.879 16:06:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.879 16:06:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.879 16:06:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.879 16:06:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.879 16:06:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.879 16:06:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.879 16:06:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:56.879 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:56.879 16:06:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.879 16:06:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.879 16:06:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.879 16:06:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.879 16:06:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.879 16:06:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:56.879 16:06:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:56.879 16:06:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:56.879 16:06:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.879 16:06:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.879 16:06:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:56.879 16:06:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.879 16:06:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:56.879 Found net devices under 0000:86:00.0: cvl_0_0 00:23:56.879 16:06:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.879 16:06:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.879 16:06:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.879 16:06:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:56.879 16:06:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.879 16:06:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:56.880 Found net devices under 0000:86:00.1: cvl_0_1 00:23:56.880 16:06:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.880 16:06:35 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:56.880 16:06:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:56.880 16:06:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:56.880 16:06:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:56.880 16:06:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:56.880 16:06:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.880 16:06:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.880 16:06:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.880 16:06:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:56.880 16:06:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.880 16:06:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.880 16:06:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:56.880 16:06:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.880 16:06:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.880 16:06:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:56.880 16:06:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:56.880 16:06:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.880 16:06:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.880 16:06:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.880 16:06:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.880 16:06:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:56.880 16:06:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.880 16:06:35 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.880 16:06:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:56.880 16:06:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:56.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:56.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:23:56.880 00:23:56.880 --- 10.0.0.2 ping statistics --- 00:23:56.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.880 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:23:56.880 16:06:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:56.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:23:56.880 00:23:56.880 --- 10.0.0.1 ping statistics --- 00:23:56.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.880 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:23:56.880 16:06:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.880 16:06:35 -- nvmf/common.sh@411 -- # return 0 00:23:56.880 16:06:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:56.880 16:06:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.880 16:06:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:56.880 16:06:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:56.880 16:06:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.880 16:06:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:56.880 16:06:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:56.880 16:06:35 -- host/fio.sh@14 -- # [[ y != y ]] 00:23:56.880 16:06:35 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:23:56.880 16:06:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:56.880 16:06:35 -- common/autotest_common.sh@10 -- # set +x 00:23:56.880 16:06:36 -- host/fio.sh@22 -- # nvmfpid=2541778 00:23:56.880 16:06:36 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:56.880 16:06:36 -- host/fio.sh@26 -- # waitforlisten 2541778 00:23:56.880 16:06:36 -- common/autotest_common.sh@817 -- # '[' -z 2541778 ']' 00:23:56.880 16:06:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.880 16:06:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:56.880 16:06:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.880 16:06:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:56.880 16:06:36 -- common/autotest_common.sh@10 -- # set +x 00:23:56.880 16:06:36 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:56.880 [2024-04-26 16:06:36.082351] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:23:56.880 [2024-04-26 16:06:36.082436] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.880 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.880 [2024-04-26 16:06:36.191190] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:56.880 [2024-04-26 16:06:36.410945] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.880 [2024-04-26 16:06:36.410992] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.880 [2024-04-26 16:06:36.411002] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.880 [2024-04-26 16:06:36.411012] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.880 [2024-04-26 16:06:36.411020] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:56.880 [2024-04-26 16:06:36.411093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.880 [2024-04-26 16:06:36.411164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.880 [2024-04-26 16:06:36.411268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.880 [2024-04-26 16:06:36.411277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:57.470 16:06:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:57.470 16:06:36 -- common/autotest_common.sh@850 -- # return 0 00:23:57.470 16:06:36 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:57.470 16:06:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.470 16:06:36 -- common/autotest_common.sh@10 -- # set +x 00:23:57.470 [2024-04-26 16:06:36.857980] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.470 16:06:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.470 16:06:36 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:23:57.470 16:06:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:57.470 16:06:36 -- common/autotest_common.sh@10 -- # set +x 00:23:57.470 16:06:36 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:57.470 16:06:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.470 16:06:36 -- common/autotest_common.sh@10 -- # set +x 00:23:57.470 Malloc1 00:23:57.470 16:06:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.470 16:06:36 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:57.470 16:06:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.470 16:06:36 -- common/autotest_common.sh@10 -- # set +x 00:23:57.471 16:06:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.471 16:06:36 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:57.471 16:06:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.471 16:06:36 -- common/autotest_common.sh@10 -- # set +x 00:23:57.471 16:06:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.471 16:06:37 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:57.471 16:06:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.471 16:06:37 -- common/autotest_common.sh@10 -- # set +x 00:23:57.471 [2024-04-26 16:06:37.011327] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.471 16:06:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.471 16:06:37 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:57.471 16:06:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.471 16:06:37 -- common/autotest_common.sh@10 -- # set +x 00:23:57.471 16:06:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.471 16:06:37 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:57.471 16:06:37 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:57.471 16:06:37 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:57.471 16:06:37 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:57.471 16:06:37 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:57.471 16:06:37 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:57.471 16:06:37 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:57.471 16:06:37 -- common/autotest_common.sh@1327 -- # shift 00:23:57.471 16:06:37 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:57.471 16:06:37 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:57.471 16:06:37 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:57.471 16:06:37 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:57.471 16:06:37 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:57.471 16:06:37 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:57.471 16:06:37 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:57.471 16:06:37 -- common/autotest_common.sh@1333 -- # break 00:23:57.471 16:06:37 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:57.471 16:06:37 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:57.730 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:57.730 fio-3.35 00:23:57.730 Starting 1 thread 00:23:57.730 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.265 00:24:00.265 test: (groupid=0, jobs=1): err= 0: pid=2542223: Fri Apr 26 16:06:39 2024 00:24:00.265 read: IOPS=9588, BW=37.5MiB/s (39.3MB/s)(75.1MiB/2006msec) 00:24:00.265 slat (nsec): min=1849, max=275767, avg=2113.72, stdev=2831.67 00:24:00.265 clat (usec): min=4325, max=17374, avg=7551.48, stdev=1435.65 00:24:00.265 lat (usec): min=4327, max=17376, avg=7553.59, stdev=1435.71 00:24:00.265 clat percentiles (usec): 00:24:00.265 | 1.00th=[ 5145], 5.00th=[ 5997], 10.00th=[ 6390], 20.00th=[ 6783], 00:24:00.265 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7308], 60.00th=[ 7439], 00:24:00.265 | 70.00th=[ 7635], 80.00th=[ 7963], 90.00th=[ 8717], 95.00th=[10552], 00:24:00.265 | 99.00th=[13435], 99.50th=[14615], 99.90th=[16712], 99.95th=[17171], 00:24:00.265 | 99.99th=[17433] 00:24:00.265 bw ( KiB/s): min=37192, max=39128, per=99.89%, avg=38312.00, stdev=809.73, samples=4 00:24:00.265 iops : min= 9298, max= 9782, avg=9578.00, stdev=202.43, samples=4 00:24:00.265 write: IOPS=9592, BW=37.5MiB/s (39.3MB/s)(75.2MiB/2006msec); 0 zone resets 00:24:00.265 slat (nsec): min=1932, max=246578, avg=2204.79, stdev=2053.11 00:24:00.265 clat (usec): min=2689, max=11418, avg=5730.54, stdev=829.91 00:24:00.265 lat (usec): min=2691, max=11420, avg=5732.74, stdev=830.01 00:24:00.265 clat percentiles (usec): 00:24:00.265 | 1.00th=[ 3556], 5.00th=[ 4293], 10.00th=[ 4817], 20.00th=[ 5276], 00:24:00.265 | 30.00th=[ 5473], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 5866], 00:24:00.265 | 70.00th=[ 5997], 80.00th=[ 6194], 90.00th=[ 6456], 95.00th=[ 
6915], 00:24:00.265 | 99.00th=[ 8586], 99.50th=[ 9241], 99.90th=[10552], 99.95th=[11207], 00:24:00.265 | 99.99th=[11338] 00:24:00.265 bw ( KiB/s): min=38008, max=38856, per=100.00%, avg=38388.00, stdev=352.88, samples=4 00:24:00.265 iops : min= 9502, max= 9714, avg=9597.00, stdev=88.22, samples=4 00:24:00.265 lat (msec) : 4=1.50%, 10=95.46%, 20=3.04% 00:24:00.265 cpu : usr=65.49%, sys=27.63%, ctx=49, majf=0, minf=1531 00:24:00.265 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:00.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:00.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:00.265 issued rwts: total=19234,19243,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:00.265 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:00.265 00:24:00.265 Run status group 0 (all jobs): 00:24:00.265 READ: bw=37.5MiB/s (39.3MB/s), 37.5MiB/s-37.5MiB/s (39.3MB/s-39.3MB/s), io=75.1MiB (78.8MB), run=2006-2006msec 00:24:00.265 WRITE: bw=37.5MiB/s (39.3MB/s), 37.5MiB/s-37.5MiB/s (39.3MB/s-39.3MB/s), io=75.2MiB (78.8MB), run=2006-2006msec 00:24:00.524 ----------------------------------------------------- 00:24:00.524 Suppressions used: 00:24:00.524 count bytes template 00:24:00.524 1 57 /usr/src/fio/parse.c 00:24:00.524 1 8 libtcmalloc_minimal.so 00:24:00.524 ----------------------------------------------------- 00:24:00.524 00:24:00.524 16:06:40 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:00.524 16:06:40 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:00.524 16:06:40 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:24:00.524 16:06:40 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:00.524 16:06:40 -- common/autotest_common.sh@1325 -- # local sanitizers 00:24:00.524 16:06:40 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:00.524 16:06:40 -- common/autotest_common.sh@1327 -- # shift 00:24:00.524 16:06:40 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:24:00.524 16:06:40 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:24:00.524 16:06:40 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:00.524 16:06:40 -- common/autotest_common.sh@1331 -- # grep libasan 00:24:00.524 16:06:40 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:24:00.524 16:06:40 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:00.524 16:06:40 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:00.524 16:06:40 -- common/autotest_common.sh@1333 -- # break 00:24:00.524 16:06:40 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:00.524 16:06:40 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:00.783 test: (g=0): 
rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:00.783 fio-3.35 00:24:00.783 Starting 1 thread 00:24:00.783 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.384 00:24:03.384 test: (groupid=0, jobs=1): err= 0: pid=2542790: Fri Apr 26 16:06:42 2024 00:24:03.384 read: IOPS=8503, BW=133MiB/s (139MB/s)(266MiB/2004msec) 00:24:03.384 slat (nsec): min=2923, max=92976, avg=3267.17, stdev=1241.82 00:24:03.384 clat (usec): min=449, max=35055, avg=9037.21, stdev=3164.65 00:24:03.384 lat (usec): min=458, max=35062, avg=9040.48, stdev=3165.04 00:24:03.384 clat percentiles (usec): 00:24:03.384 | 1.00th=[ 4621], 5.00th=[ 5473], 10.00th=[ 6128], 20.00th=[ 6915], 00:24:03.384 | 30.00th=[ 7439], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 8979], 00:24:03.384 | 70.00th=[ 9634], 80.00th=[10552], 90.00th=[11731], 95.00th=[13698], 00:24:03.384 | 99.00th=[23987], 99.50th=[25560], 99.90th=[26608], 99.95th=[27132], 00:24:03.384 | 99.99th=[32113] 00:24:03.384 bw ( KiB/s): min=63266, max=79552, per=51.49%, avg=70064.50, stdev=7372.85, samples=4 00:24:03.384 iops : min= 3954, max= 4972, avg=4379.00, stdev=460.84, samples=4 00:24:03.384 write: IOPS=4992, BW=78.0MiB/s (81.8MB/s)(143MiB/1827msec); 0 zone resets 00:24:03.384 slat (usec): min=31, max=282, avg=32.79, stdev= 5.34 00:24:03.384 clat (usec): min=3771, max=28872, avg=10427.63, stdev=2722.03 00:24:03.384 lat (usec): min=3803, max=28958, avg=10460.41, stdev=2724.32 00:24:03.384 clat percentiles (usec): 00:24:03.384 | 1.00th=[ 6783], 5.00th=[ 7635], 10.00th=[ 8094], 20.00th=[ 8717], 00:24:03.384 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10421], 00:24:03.384 | 70.00th=[10945], 80.00th=[11600], 90.00th=[12911], 95.00th=[13960], 00:24:03.384 | 99.00th=[25035], 99.50th=[25822], 99.90th=[26084], 99.95th=[26608], 00:24:03.384 | 99.99th=[28967] 00:24:03.384 bw ( KiB/s): min=65147, max=83488, per=91.27%, avg=72910.75, stdev=8293.47, samples=4 00:24:03.384 iops : min= 4071, max= 5218, avg=4556.75, stdev=518.56, samples=4 00:24:03.384 lat (usec) : 500=0.01% 00:24:03.384 lat (msec) : 4=0.12%, 10=66.13%, 20=31.49%, 50=2.26% 00:24:03.384 cpu : usr=85.08%, sys=11.83%, ctx=20, majf=0, minf=2276 00:24:03.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:03.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:03.384 issued rwts: total=17042,9122,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.384 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:03.384 00:24:03.384 Run status group 0 (all jobs): 00:24:03.384 READ: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=266MiB (279MB), run=2004-2004msec 00:24:03.384 WRITE: bw=78.0MiB/s (81.8MB/s), 78.0MiB/s-78.0MiB/s (81.8MB/s-81.8MB/s), io=143MiB (149MB), run=1827-1827msec 00:24:03.642 ----------------------------------------------------- 00:24:03.642 Suppressions used: 00:24:03.642 count bytes template 00:24:03.642 1 57 /usr/src/fio/parse.c 00:24:03.642 846 81216 /usr/src/fio/iolog.c 00:24:03.642 1 8 libtcmalloc_minimal.so 00:24:03.642 ----------------------------------------------------- 00:24:03.642 00:24:03.642 16:06:43 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:03.642 16:06:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.642 16:06:43 -- common/autotest_common.sh@10 -- # set +x 00:24:03.642 16:06:43 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.642 16:06:43 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:24:03.642 16:06:43 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:24:03.642 16:06:43 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:24:03.642 16:06:43 -- host/fio.sh@84 -- # nvmftestfini 00:24:03.642 16:06:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:03.642 16:06:43 -- nvmf/common.sh@117 -- # sync 00:24:03.642 16:06:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:03.642 16:06:43 -- nvmf/common.sh@120 -- # set +e 00:24:03.642 16:06:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:03.642 16:06:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:03.642 rmmod nvme_tcp 00:24:03.642 rmmod nvme_fabrics 00:24:03.642 rmmod nvme_keyring 00:24:03.642 16:06:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:03.642 16:06:43 -- nvmf/common.sh@124 -- # set -e 00:24:03.642 16:06:43 -- nvmf/common.sh@125 -- # return 0 00:24:03.642 16:06:43 -- nvmf/common.sh@478 -- # '[' -n 2541778 ']' 00:24:03.642 16:06:43 -- nvmf/common.sh@479 -- # killprocess 2541778 00:24:03.642 16:06:43 -- common/autotest_common.sh@936 -- # '[' -z 2541778 ']' 00:24:03.642 16:06:43 -- common/autotest_common.sh@940 -- # kill -0 2541778 00:24:03.642 16:06:43 -- common/autotest_common.sh@941 -- # uname 00:24:03.642 16:06:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:03.642 16:06:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2541778 00:24:03.642 16:06:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:03.642 16:06:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:03.642 16:06:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2541778' 00:24:03.642 killing process with pid 2541778 00:24:03.642 16:06:43 -- common/autotest_common.sh@955 -- # kill 2541778 00:24:03.642 16:06:43 -- common/autotest_common.sh@960 -- # wait 2541778 00:24:05.545 16:06:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:05.545 16:06:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:05.545 16:06:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:05.545 16:06:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:05.545 16:06:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:05.545 16:06:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.545 16:06:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:05.545 16:06:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.450 16:06:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:07.450 00:24:07.450 real 0m15.814s 00:24:07.450 user 0m46.920s 00:24:07.450 sys 0m6.044s 00:24:07.450 16:06:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:07.450 16:06:46 -- common/autotest_common.sh@10 -- # set +x 00:24:07.450 ************************************ 00:24:07.450 END TEST nvmf_fio_host 00:24:07.450 ************************************ 00:24:07.450 16:06:46 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:07.450 16:06:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:07.450 16:06:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:07.450 16:06:46 -- common/autotest_common.sh@10 -- # set +x 00:24:07.450 ************************************ 00:24:07.450 START TEST nvmf_failover 00:24:07.450 
************************************ 00:24:07.450 16:06:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:07.450 * Looking for test storage... 00:24:07.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:07.450 16:06:47 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:07.450 16:06:47 -- nvmf/common.sh@7 -- # uname -s 00:24:07.450 16:06:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:07.450 16:06:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:07.450 16:06:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:07.450 16:06:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:07.450 16:06:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:07.450 16:06:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:07.450 16:06:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:07.450 16:06:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:07.450 16:06:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.450 16:06:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:07.450 16:06:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:07.450 16:06:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:07.450 16:06:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.450 16:06:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:07.450 16:06:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:07.450 16:06:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:07.450 16:06:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:07.450 16:06:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.450 16:06:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.450 16:06:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.450 16:06:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.450 16:06:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.450 16:06:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.450 16:06:47 -- paths/export.sh@5 -- # export PATH 00:24:07.450 16:06:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.450 16:06:47 -- nvmf/common.sh@47 -- # : 0 00:24:07.450 16:06:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:07.450 16:06:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:07.450 16:06:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:07.450 16:06:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:07.450 16:06:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:07.450 16:06:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:07.450 16:06:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:07.450 16:06:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:07.450 16:06:47 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:07.450 16:06:47 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:07.450 16:06:47 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:07.450 16:06:47 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:07.450 16:06:47 -- host/failover.sh@18 -- # nvmftestinit 00:24:07.450 16:06:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:07.451 16:06:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.451 16:06:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:07.451 16:06:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:07.451 16:06:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:07.451 16:06:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.451 16:06:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:07.451 16:06:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.451 16:06:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:07.451 16:06:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:07.451 16:06:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:07.451 16:06:47 -- common/autotest_common.sh@10 -- # set +x 00:24:12.722 16:06:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:12.722 16:06:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:12.722 16:06:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:12.722 16:06:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:12.722 16:06:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:12.722 16:06:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:12.722 16:06:52 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:24:12.722 16:06:52 -- nvmf/common.sh@295 -- # net_devs=() 00:24:12.722 16:06:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:12.722 16:06:52 -- nvmf/common.sh@296 -- # e810=() 00:24:12.722 16:06:52 -- nvmf/common.sh@296 -- # local -ga e810 00:24:12.722 16:06:52 -- nvmf/common.sh@297 -- # x722=() 00:24:12.722 16:06:52 -- nvmf/common.sh@297 -- # local -ga x722 00:24:12.722 16:06:52 -- nvmf/common.sh@298 -- # mlx=() 00:24:12.722 16:06:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:12.722 16:06:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:12.722 16:06:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:12.722 16:06:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:12.722 16:06:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:12.722 16:06:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:12.722 16:06:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:12.722 16:06:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:12.722 16:06:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:12.722 16:06:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:12.722 16:06:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:12.722 16:06:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:12.722 16:06:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:12.722 16:06:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:12.722 16:06:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:12.722 16:06:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:12.722 16:06:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:12.722 16:06:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:12.722 16:06:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:12.722 16:06:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:12.722 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:12.722 16:06:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:12.722 16:06:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:12.722 16:06:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.722 16:06:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.722 16:06:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:12.722 16:06:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:12.722 16:06:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:12.722 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:12.722 16:06:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:12.722 16:06:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:12.722 16:06:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.722 16:06:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.722 16:06:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:12.722 16:06:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:12.722 16:06:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:12.722 16:06:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:12.722 16:06:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:12.722 16:06:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.722 16:06:52 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:24:12.722 16:06:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.722 16:06:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:12.722 Found net devices under 0000:86:00.0: cvl_0_0 00:24:12.722 16:06:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.722 16:06:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:12.722 16:06:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.722 16:06:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:12.722 16:06:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.722 16:06:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:12.722 Found net devices under 0000:86:00.1: cvl_0_1 00:24:12.722 16:06:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.722 16:06:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:12.722 16:06:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:12.722 16:06:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:12.722 16:06:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:12.722 16:06:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:12.722 16:06:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:12.722 16:06:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:12.722 16:06:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:12.722 16:06:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:12.722 16:06:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:12.722 16:06:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:12.722 16:06:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:12.722 16:06:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:12.722 16:06:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:12.722 16:06:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:12.722 16:06:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:12.722 16:06:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:12.722 16:06:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:12.722 16:06:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:12.722 16:06:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:12.722 16:06:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:12.722 16:06:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:12.722 16:06:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:12.722 16:06:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:12.722 16:06:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:12.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:12.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:24:12.722 00:24:12.722 --- 10.0.0.2 ping statistics --- 00:24:12.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.722 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:24:12.722 16:06:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:12.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:12.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:24:12.722 00:24:12.722 --- 10.0.0.1 ping statistics --- 00:24:12.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.722 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:24:12.722 16:06:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:12.722 16:06:52 -- nvmf/common.sh@411 -- # return 0 00:24:12.722 16:06:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:12.722 16:06:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:12.722 16:06:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:12.722 16:06:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:12.722 16:06:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:12.722 16:06:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:12.722 16:06:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:12.722 16:06:52 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:12.722 16:06:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:12.722 16:06:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:12.722 16:06:52 -- common/autotest_common.sh@10 -- # set +x 00:24:12.722 16:06:52 -- nvmf/common.sh@470 -- # nvmfpid=2546782 00:24:12.722 16:06:52 -- nvmf/common.sh@471 -- # waitforlisten 2546782 00:24:12.722 16:06:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:12.722 16:06:52 -- common/autotest_common.sh@817 -- # '[' -z 2546782 ']' 00:24:12.722 16:06:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.722 16:06:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:12.722 16:06:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.722 16:06:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:12.722 16:06:52 -- common/autotest_common.sh@10 -- # set +x 00:24:12.982 [2024-04-26 16:06:52.408200] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:24:12.982 [2024-04-26 16:06:52.408280] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.982 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.982 [2024-04-26 16:06:52.515868] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:13.241 [2024-04-26 16:06:52.751290] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.241 [2024-04-26 16:06:52.751337] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.241 [2024-04-26 16:06:52.751347] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.241 [2024-04-26 16:06:52.751373] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.241 [2024-04-26 16:06:52.751386] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
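Pulling the nvmf_tcp_init and nvmfappstart steps above out of the xtrace noise, the test topology for this run can be reproduced roughly as follows. This is a condensed sketch, not the script itself: the nvmf_tgt path is shortened to be relative to the spdk checkout, and it assumes the two ice ports already carry the cvl_0_0/cvl_0_1 names reported above.

  # interface roles as reported above: cvl_0_0 = target port, cvl_0_1 = initiator port
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (default namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (inside the namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port, as the common.sh helper does
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # target on cores 1-3, matching the reactor notices above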
00:24:13.241 [2024-04-26 16:06:52.751513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.241 [2024-04-26 16:06:52.751578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.241 [2024-04-26 16:06:52.751585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:13.817 16:06:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:13.817 16:06:53 -- common/autotest_common.sh@850 -- # return 0 00:24:13.817 16:06:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:13.817 16:06:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:13.817 16:06:53 -- common/autotest_common.sh@10 -- # set +x 00:24:13.817 16:06:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.817 16:06:53 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:13.817 [2024-04-26 16:06:53.380037] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.817 16:06:53 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:14.076 Malloc0 00:24:14.076 16:06:53 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:14.334 16:06:53 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:14.592 16:06:54 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:14.592 [2024-04-26 16:06:54.178009] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.592 16:06:54 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:14.850 [2024-04-26 16:06:54.354512] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:14.850 16:06:54 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:15.109 [2024-04-26 16:06:54.547145] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:15.109 16:06:54 -- host/failover.sh@31 -- # bdevperf_pid=2547220 00:24:15.109 16:06:54 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:15.109 16:06:54 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:15.109 16:06:54 -- host/failover.sh@34 -- # waitforlisten 2547220 /var/tmp/bdevperf.sock 00:24:15.109 16:06:54 -- common/autotest_common.sh@817 -- # '[' -z 2547220 ']' 00:24:15.109 16:06:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:15.109 16:06:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:15.109 16:06:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:24:15.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:15.109 16:06:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:15.109 16:06:54 -- common/autotest_common.sh@10 -- # set +x 00:24:16.045 16:06:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:16.045 16:06:55 -- common/autotest_common.sh@850 -- # return 0 00:24:16.045 16:06:55 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:16.304 NVMe0n1 00:24:16.304 16:06:55 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:16.562 00:24:16.562 16:06:56 -- host/failover.sh@39 -- # run_test_pid=2547481 00:24:16.562 16:06:56 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:16.562 16:06:56 -- host/failover.sh@41 -- # sleep 1 00:24:17.498 16:06:57 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:17.757 [2024-04-26 16:06:57.222616] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.757 [2024-04-26 16:06:57.222666] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.757 [2024-04-26 16:06:57.222677] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.757 [2024-04-26 16:06:57.222686] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.757 [2024-04-26 16:06:57.222694] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.757 [2024-04-26 16:06:57.222702] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.757 [2024-04-26 16:06:57.222711] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.757 [2024-04-26 16:06:57.222718] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.757 [2024-04-26 16:06:57.222726] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.757 [2024-04-26 16:06:57.222734] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.757 [2024-04-26 16:06:57.222741] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.757 [2024-04-26 16:06:57.222749] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.757 [2024-04-26 16:06:57.222758] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.757 [2024-04-26 16:06:57.222766] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.757 [2024-04-26 16:06:57.222773] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.757 [2024-04-26 16:06:57.222781] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.757 [2024-04-26 16:06:57.222789] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.757 [2024-04-26 16:06:57.222797] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.757 [2024-04-26 16:06:57.222805] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222812] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222820] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222828] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222836] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222844] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222856] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222864] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222872] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222880] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222888] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222896] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222904] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222912] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222927] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222935] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222943] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222951] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222960] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222969] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222978] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222987] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.222994] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.223002] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.223010] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.223018] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.223026] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.223034] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.223041] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.223049] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.223057] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.223066] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.223080] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.223087] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 [2024-04-26 16:06:57.223096] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:17.758 16:06:57 -- host/failover.sh@45 -- # sleep 3 00:24:21.045 16:07:00 -- 
host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:21.045 00:24:21.045 16:07:00 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:21.045 [2024-04-26 16:07:00.693364] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693421] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693431] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693440] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693449] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693456] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693465] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693472] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693480] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693488] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693496] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693503] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693511] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693519] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693527] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693535] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693543] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693551] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693559] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693571] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693580] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693588] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693596] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693604] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693612] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693620] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693628] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693636] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693644] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693653] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693660] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693668] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693676] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.045 [2024-04-26 16:07:00.693684] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.046 [2024-04-26 16:07:00.693692] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.046 [2024-04-26 16:07:00.693700] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.046 [2024-04-26 16:07:00.693708] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.046 [2024-04-26 16:07:00.693717] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.046 [2024-04-26 16:07:00.693725] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.046 [2024-04-26 16:07:00.693733] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.046 [2024-04-26 16:07:00.693742] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.046 [2024-04-26 16:07:00.693750] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.046 [2024-04-26 16:07:00.693758] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.046 [2024-04-26 16:07:00.693767] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.046 [2024-04-26 16:07:00.693775] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.046 [2024-04-26 16:07:00.693785] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.046 [2024-04-26 16:07:00.693793] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:21.046 16:07:00 -- host/failover.sh@50 -- # sleep 3 00:24:24.336 16:07:03 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:24.336 [2024-04-26 16:07:03.899533] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.336 16:07:03 -- host/failover.sh@55 -- # sleep 1 00:24:25.272 16:07:04 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:25.531 [2024-04-26 16:07:05.085495] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085549] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085560] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085569] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085577] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085586] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085594] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085601] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085609] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085617] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085626] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085634] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085642] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085650] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085658] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085666] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085673] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085681] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085688] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085696] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085703] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085716] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085724] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085733] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085741] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085748] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085756] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085764] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085772] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085780] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085788] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085796] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085804] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085812] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085820] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085828] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085851] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085860] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085868] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085875] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085892] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085899] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.531 [2024-04-26 16:07:05.085907] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.085915] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.085923] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.085931] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.085941] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.085949] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.085957] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.085965] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.085973] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.085980] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.085988] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.085996] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.086004] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.086012] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.086019] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.086027] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.086035] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.086042] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.086049] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.086057] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.086065] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.086077] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.086085] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.086093] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.086101] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.086109] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.086117] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.086124] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 [2024-04-26 16:07:05.086132] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:25.532 16:07:05 -- host/failover.sh@59 -- 
# wait 2547481 00:24:32.098 0 00:24:32.098 16:07:11 -- host/failover.sh@61 -- # killprocess 2547220 00:24:32.098 16:07:11 -- common/autotest_common.sh@936 -- # '[' -z 2547220 ']' 00:24:32.098 16:07:11 -- common/autotest_common.sh@940 -- # kill -0 2547220 00:24:32.098 16:07:11 -- common/autotest_common.sh@941 -- # uname 00:24:32.098 16:07:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:32.098 16:07:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2547220 00:24:32.098 16:07:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:32.098 16:07:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:32.098 16:07:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2547220' 00:24:32.098 killing process with pid 2547220 00:24:32.098 16:07:11 -- common/autotest_common.sh@955 -- # kill 2547220 00:24:32.098 16:07:11 -- common/autotest_common.sh@960 -- # wait 2547220 00:24:32.671 16:07:12 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:32.671 [2024-04-26 16:06:54.648798] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:24:32.671 [2024-04-26 16:06:54.648899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2547220 ] 00:24:32.671 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.671 [2024-04-26 16:06:54.753295] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.671 [2024-04-26 16:06:54.988695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.671 Running I/O for 15 seconds... 
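For reference, the failover exercise that produced the try.txt output above reduces to the RPC sequence below. This is a condensed sketch rather than the script itself: rpc.py and bdevperf paths are abbreviated relative to the spdk checkout, and the listener add/remove timing mirrors this particular run. The bursts of tcp.c:1587 "recv state" messages earlier in the log appear immediately after each nvmf_subsystem_remove_listener call, while bdevperf keeps verifying I/O over whichever listener is still up.

  # all paths relative to the spdk checkout
  rpc=./scripts/rpc.py

  # target side: one malloc namespace exported over three TCP listeners
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done

  # initiator side: bdevperf with two paths to the same subsystem
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

  # while the 15-second verify workload runs, listeners are cycled to force path failover
  sleep 1; $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3; $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3; $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1; $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  wait    # let perform_tests and the 15-second run finish; the real script then kills bdevperf and dumps try.txt as above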
00:24:32.671 [2024-04-26 16:06:57.224111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:85648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.671 [2024-04-26 16:06:57.224155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.671 [2024-04-26 16:06:57.224181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:85656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.671 [2024-04-26 16:06:57.224192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.671 [2024-04-26 16:06:57.224205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:85664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.671 [2024-04-26 16:06:57.224215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:85696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:85712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:85720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224385] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:85728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:85736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:85744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:85752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:85776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:85784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224593] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:62 nsid:1 lba:85808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:85824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:86528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.672 [2024-04-26 16:06:57.224664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:86536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.672 [2024-04-26 16:06:57.224684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:85832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:85848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:85864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85872 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:85880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:85888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:85912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:85928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:85936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.224982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:85944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.224992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:32.672 [2024-04-26 16:06:57.225012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:85960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.225032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.225052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:85976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.225079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:85984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.225099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.225120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.225140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:86008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.225161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.672 [2024-04-26 16:06:57.225184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.672 [2024-04-26 16:06:57.225204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:86560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.672 [2024-04-26 
16:06:57.225224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.672 [2024-04-26 16:06:57.225244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.672 [2024-04-26 16:06:57.225265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.672 [2024-04-26 16:06:57.225285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:86592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.672 [2024-04-26 16:06:57.225305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:86600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.672 [2024-04-26 16:06:57.225325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.672 [2024-04-26 16:06:57.225345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.672 [2024-04-26 16:06:57.225365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.672 [2024-04-26 16:06:57.225385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.672 [2024-04-26 16:06:57.225405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.672 [2024-04-26 16:06:57.225425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:86648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.672 [2024-04-26 16:06:57.225445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.672 [2024-04-26 16:06:57.225465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:86664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.672 [2024-04-26 16:06:57.225485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.225505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:86024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.225525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:86032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.225546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.225565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.225586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.225606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.225626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.225651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.672 [2024-04-26 16:06:57.225661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.672 [2024-04-26 16:06:57.225671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.225682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.225691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.225703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.225712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.225723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:86104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.225733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.225744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:86112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.225753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.225763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:86120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.225773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.225783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.225792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.225803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.225812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.225823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.225837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.225848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:86152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.225857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.225868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.225877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.225888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.225897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.225907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.225916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.225927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.225936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.225947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.225957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.225968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.225977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.225987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.225996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:32.673 [2024-04-26 16:06:57.226047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226253] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:86312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:86328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:86344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:86352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:86360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:86440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:06:57.226760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.226770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007240 is same with the state(5) to be set 00:24:32.673 [2024-04-26 16:06:57.226783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.673 [2024-04-26 16:06:57.226791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.673 [2024-04-26 16:06:57.226801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86520 len:8 PRP1 0x0 PRP2 0x0 00:24:32.673 [2024-04-26 16:06:57.226810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.227101] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007240 was disconnected and freed. reset controller. 
00:24:32.673 [2024-04-26 16:06:57.227119] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:32.673 [2024-04-26 16:06:57.227151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.673 [2024-04-26 16:06:57.227162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.227173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.673 [2024-04-26 16:06:57.227182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.227193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.673 [2024-04-26 16:06:57.227201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.227211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.673 [2024-04-26 16:06:57.227220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:06:57.227228] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.673 [2024-04-26 16:06:57.230372] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.673 [2024-04-26 16:06:57.230418] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 00:24:32.673 [2024-04-26 16:06:57.352985] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:32.673 [2024-04-26 16:07:00.693974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.673 [2024-04-26 16:07:00.694018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:07:00.694038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.673 [2024-04-26 16:07:00.694051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:07:00.694062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.673 [2024-04-26 16:07:00.694077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:07:00.694088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.673 [2024-04-26 16:07:00.694097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:07:00.694106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004a40 is same with the state(5) to be set 00:24:32.673 [2024-04-26 16:07:00.694175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:125560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:07:00.694188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:07:00.694208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:07:00.694217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.673 [2024-04-26 16:07:00.694229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:125576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.673 [2024-04-26 16:07:00.694238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.674 [2024-04-26 16:07:00.694258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.674 [2024-04-26 16:07:00.694278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.674 [2024-04-26 16:07:00.694298] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.674 [2024-04-26 16:07:00.694317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.674 [2024-04-26 16:07:00.694337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.674 [2024-04-26 16:07:00.694357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.674 [2024-04-26 16:07:00.694382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:125640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.674 [2024-04-26 16:07:00.694401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:125648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.674 [2024-04-26 16:07:00.694423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.674 [2024-04-26 16:07:00.694443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:125664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.674 [2024-04-26 16:07:00.694463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.674 [2024-04-26 16:07:00.694484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.674 [2024-04-26 16:07:00.694508] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:126136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.674 [2024-04-26 16:07:00.694529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.674 [2024-04-26 16:07:00.694549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:126152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.674 [2024-04-26 16:07:00.694569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.674 [2024-04-26 16:07:00.694589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:126168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.674 [2024-04-26 16:07:00.694609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:126176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.674 [2024-04-26 16:07:00.694629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:126184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.674 [2024-04-26 16:07:00.694651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:126192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.674 [2024-04-26 16:07:00.694671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:125688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.674 [2024-04-26 16:07:00.694690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.674 [2024-04-26 16:07:00.694710] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.674 [2024-04-26 16:07:00.694730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.674 [2024-04-26 16:07:00.694749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.674 [2024-04-26 16:07:00.694769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.674 [2024-04-26 16:07:00.694789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.674 [2024-04-26 16:07:00.694809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.674 [2024-04-26 16:07:00.694829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:126200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.674 [2024-04-26 16:07:00.694849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:126208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.674 [2024-04-26 16:07:00.694869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:126216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.674 [2024-04-26 16:07:00.694891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.674 [2024-04-26 16:07:00.694902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:126224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.674 [2024-04-26 16:07:00.694911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:32.674 [2024-04-26 16:07:00.694922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:126232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:32.674 [2024-04-26 16:07:00.694931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:32.674 [2024-04-26 16:07:00.695002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.674 [2024-04-26 16:07:00.695011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pair repeats for each remaining queued WRITE (lba 126240-126576) and READ (lba 125760-126112) on sqid:1, every completion reporting ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:24:32.675 [2024-04-26 16:07:00.696744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.675 [2024-04-26 16:07:00.696753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:32.675 [2024-04-26 16:07:00.696781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:32.675 [2024-04-26 16:07:00.696790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:32.675 [2024-04-26 16:07:00.696799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126128 len:8 PRP1 0x0 PRP2 0x0
00:24:32.675 [2024-04-26 16:07:00.696809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:32.675 [2024-04-26 16:07:00.697084] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000008040 was disconnected and freed. reset controller.
00:24:32.675 [2024-04-26 16:07:00.697099] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:24:32.675 [2024-04-26 16:07:00.697109] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:32.675 [2024-04-26 16:07:00.700228] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:32.675 [2024-04-26 16:07:00.700270] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor
00:24:32.675 [2024-04-26 16:07:00.735800] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:32.675 [2024-04-26 16:07:05.086442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.675 [2024-04-26 16:07:05.086485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:32.675 [2024-04-26 16:07:05.086509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.675 [2024-04-26 16:07:05.086520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:32.675 [2024-04-26 16:07:05.086533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.675 [2024-04-26 16:07:05.086543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:32.675 [2024-04-26 16:07:05.086554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.675 [2024-04-26 16:07:05.086564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:32.675 [2024-04-26 16:07:05.086579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.675 [2024-04-26 16:07:05.086589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:32.676 [2024-04-26 16:07:05.086599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.676 [2024-04-26 16:07:05.086609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pair repeats for the remaining queued READ commands on sqid:1 (lba 83112-83904), each completion reporting ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:24:32.677 [2024-04-26 16:07:05.088687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:32.677 [2024-04-26 16:07:05.088696] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.677 [2024-04-26 16:07:05.088707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.677 [2024-04-26 16:07:05.088716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.677 [2024-04-26 16:07:05.088727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.677 [2024-04-26 16:07:05.088736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.677 [2024-04-26 16:07:05.088746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.677 [2024-04-26 16:07:05.088755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.677 [2024-04-26 16:07:05.088766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.677 [2024-04-26 16:07:05.088775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.677 [2024-04-26 16:07:05.088786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.677 [2024-04-26 16:07:05.088795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.677 [2024-04-26 16:07:05.088805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.677 [2024-04-26 16:07:05.088816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.677 [2024-04-26 16:07:05.088826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.677 [2024-04-26 16:07:05.088836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.677 [2024-04-26 16:07:05.088847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.677 [2024-04-26 16:07:05.088855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.677 [2024-04-26 16:07:05.088866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.677 [2024-04-26 16:07:05.088880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.677 [2024-04-26 16:07:05.088892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.677 [2024-04-26 16:07:05.088902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.677 [2024-04-26 16:07:05.088912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.677 [2024-04-26 16:07:05.088923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.677 [2024-04-26 16:07:05.088934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.677 [2024-04-26 16:07:05.088943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.677 [2024-04-26 16:07:05.088954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.677 [2024-04-26 16:07:05.088963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.677 [2024-04-26 16:07:05.088974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.677 [2024-04-26 16:07:05.088984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.677 [2024-04-26 16:07:05.088994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.677 [2024-04-26 16:07:05.089003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.677 [2024-04-26 16:07:05.089015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.677 [2024-04-26 16:07:05.089024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.677 [2024-04-26 16:07:05.089035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.677 [2024-04-26 16:07:05.089044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.677 [2024-04-26 16:07:05.089054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.677 [2024-04-26 16:07:05.089063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.677 [2024-04-26 16:07:05.089080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.677 [2024-04-26 16:07:05.089090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.677 [2024-04-26 16:07:05.089101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.677 [2024-04-26 16:07:05.089110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:32.677 [2024-04-26 16:07:05.089121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000009240 is same with the state(5) to be set 
00:24:32.677 [2024-04-26 16:07:05.089133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:24:32.677 [2024-04-26 16:07:05.089141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:24:32.677 [2024-04-26 16:07:05.089151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84080 len:8 PRP1 0x0 PRP2 0x0 
00:24:32.677 [2024-04-26 16:07:05.089161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:32.677 [2024-04-26 16:07:05.089431] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000009240 was disconnected and freed. reset controller. 
00:24:32.677 [2024-04-26 16:07:05.089444] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 
00:24:32.677 [2024-04-26 16:07:05.089475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:32.677 [2024-04-26 16:07:05.089486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:32.677 [2024-04-26 16:07:05.089497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:32.677 [2024-04-26 16:07:05.089506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:32.677 [2024-04-26 16:07:05.089516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:32.677 [2024-04-26 16:07:05.089525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:32.677 [2024-04-26 16:07:05.089536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:32.677 [2024-04-26 16:07:05.089545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:32.677 [2024-04-26 16:07:05.089554] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:32.677 [2024-04-26 16:07:05.089585] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 
00:24:32.677 [2024-04-26 16:07:05.092721] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:24:32.677 [2024-04-26 16:07:05.129381] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:32.677 
00:24:32.677 Latency(us) 
00:24:32.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:32.677 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:24:32.677 Verification LBA range: start 0x0 length 0x4000 
00:24:32.677 NVMe0n1 : 15.01 9647.37 37.69 543.54 0.00 12534.32 1225.24 20743.57 
00:24:32.677 =================================================================================================================== 
00:24:32.677 Total : 9647.37 37.69 543.54 0.00 12534.32 1225.24 20743.57 
00:24:32.677 Received shutdown signal, test time was about 15.000000 seconds 
00:24:32.677 
00:24:32.677 Latency(us) 
00:24:32.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:32.677 =================================================================================================================== 
00:24:32.678 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:24:32.678 16:07:12 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:24:32.678 16:07:12 -- host/failover.sh@65 -- # count=3 
00:24:32.678 16:07:12 -- host/failover.sh@67 -- # (( count != 3 )) 
00:24:32.678 16:07:12 -- host/failover.sh@73 -- # bdevperf_pid=2550023 
00:24:32.678 16:07:12 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 
00:24:32.678 16:07:12 -- host/failover.sh@75 -- # waitforlisten 2550023 /var/tmp/bdevperf.sock 
00:24:32.678 16:07:12 -- common/autotest_common.sh@817 -- # '[' -z 2550023 ']' 
00:24:32.678 16:07:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:24:32.678 16:07:12 -- common/autotest_common.sh@822 -- # local max_retries=100 
00:24:32.678 16:07:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:32.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
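Two quick notes on the summary above. The MiB/s column is just IOPS scaled by the 4096-byte I/O size (9647.37 x 4096 / 1048576 is roughly 37.69 MiB/s), and the pass criterion traced at host/failover.sh@65-67 is a plain grep count of successful controller resets. A minimal standalone sketch of that check, assuming the bdevperf output has been captured to a file named try.txt the way the test does:

# count how many times bdev_nvme reported a successful controller reset
count=$(grep -c 'Resetting controller successful' try.txt)
# this run expects exactly three resets; fail the stage otherwise
if (( count != 3 )); then
    echo "nvmf_failover: expected 3 successful resets, saw $count" >&2
    exit 1
fi

The script itself applies the same (( count != 3 )) test visible in the trace before it moves on to the second bdevperf instance below.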
00:24:32.678 16:07:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:32.678 16:07:12 -- common/autotest_common.sh@10 -- # set +x 00:24:33.613 16:07:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:33.613 16:07:13 -- common/autotest_common.sh@850 -- # return 0 00:24:33.613 16:07:13 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:33.613 [2024-04-26 16:07:13.296296] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:33.871 16:07:13 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:33.871 [2024-04-26 16:07:13.476870] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:33.871 16:07:13 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:34.129 NVMe0n1 00:24:34.129 16:07:13 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:34.695 00:24:34.695 16:07:14 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:34.695 00:24:34.953 16:07:14 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:34.953 16:07:14 -- host/failover.sh@82 -- # grep -q NVMe0 00:24:34.953 16:07:14 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:35.211 16:07:14 -- host/failover.sh@87 -- # sleep 3 00:24:38.507 16:07:17 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:38.507 16:07:17 -- host/failover.sh@88 -- # grep -q NVMe0 00:24:38.507 16:07:17 -- host/failover.sh@90 -- # run_test_pid=2550953 00:24:38.507 16:07:17 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:38.507 16:07:17 -- host/failover.sh@92 -- # wait 2550953 00:24:39.445 0 00:24:39.445 16:07:19 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:39.445 [2024-04-26 16:07:12.353828] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:24:39.445 [2024-04-26 16:07:12.353921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2550023 ] 00:24:39.445 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.445 [2024-04-26 16:07:12.458449] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.445 [2024-04-26 16:07:12.690858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.445 [2024-04-26 16:07:14.738204] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:39.445 [2024-04-26 16:07:14.738273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.445 [2024-04-26 16:07:14.738290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.445 [2024-04-26 16:07:14.738303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.445 [2024-04-26 16:07:14.738314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.445 [2024-04-26 16:07:14.738325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.445 [2024-04-26 16:07:14.738335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.445 [2024-04-26 16:07:14.738351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.445 [2024-04-26 16:07:14.738361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.445 [2024-04-26 16:07:14.738370] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.445 [2024-04-26 16:07:14.738422] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.445 [2024-04-26 16:07:14.738448] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 00:24:39.445 [2024-04-26 16:07:14.786868] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:39.445 Running I/O for 1 seconds... 
00:24:39.445 00:24:39.445 Latency(us) 00:24:39.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.445 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:39.445 Verification LBA range: start 0x0 length 0x4000 00:24:39.445 NVMe0n1 : 1.01 9643.87 37.67 0.00 0.00 13214.72 2877.89 21883.33 00:24:39.445 =================================================================================================================== 00:24:39.445 Total : 9643.87 37.67 0.00 0.00 13214.72 2877.89 21883.33 00:24:39.445 16:07:19 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:39.445 16:07:19 -- host/failover.sh@95 -- # grep -q NVMe0 00:24:39.704 16:07:19 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:39.963 16:07:19 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:39.963 16:07:19 -- host/failover.sh@99 -- # grep -q NVMe0 00:24:39.963 16:07:19 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:40.221 16:07:19 -- host/failover.sh@101 -- # sleep 3 00:24:43.607 16:07:22 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:43.607 16:07:22 -- host/failover.sh@103 -- # grep -q NVMe0 00:24:43.607 16:07:23 -- host/failover.sh@108 -- # killprocess 2550023 00:24:43.607 16:07:23 -- common/autotest_common.sh@936 -- # '[' -z 2550023 ']' 00:24:43.607 16:07:23 -- common/autotest_common.sh@940 -- # kill -0 2550023 00:24:43.607 16:07:23 -- common/autotest_common.sh@941 -- # uname 00:24:43.607 16:07:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:43.607 16:07:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2550023 00:24:43.607 16:07:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:43.607 16:07:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:43.607 16:07:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2550023' 00:24:43.607 killing process with pid 2550023 00:24:43.607 16:07:23 -- common/autotest_common.sh@955 -- # kill 2550023 00:24:43.607 16:07:23 -- common/autotest_common.sh@960 -- # wait 2550023 00:24:44.542 16:07:24 -- host/failover.sh@110 -- # sync 00:24:44.542 16:07:24 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:44.801 16:07:24 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:44.801 16:07:24 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:44.801 16:07:24 -- host/failover.sh@116 -- # nvmftestfini 00:24:44.801 16:07:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:44.801 16:07:24 -- nvmf/common.sh@117 -- # sync 00:24:44.801 16:07:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:44.801 16:07:24 -- nvmf/common.sh@120 -- # set +e 00:24:44.801 16:07:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:44.801 16:07:24 -- nvmf/common.sh@122 -- 
# modprobe -v -r nvme-tcp 00:24:44.801 rmmod nvme_tcp 00:24:44.801 rmmod nvme_fabrics 00:24:44.801 rmmod nvme_keyring 00:24:44.801 16:07:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:44.801 16:07:24 -- nvmf/common.sh@124 -- # set -e 00:24:44.801 16:07:24 -- nvmf/common.sh@125 -- # return 0 00:24:44.801 16:07:24 -- nvmf/common.sh@478 -- # '[' -n 2546782 ']' 00:24:44.801 16:07:24 -- nvmf/common.sh@479 -- # killprocess 2546782 00:24:44.801 16:07:24 -- common/autotest_common.sh@936 -- # '[' -z 2546782 ']' 00:24:44.801 16:07:24 -- common/autotest_common.sh@940 -- # kill -0 2546782 00:24:44.801 16:07:24 -- common/autotest_common.sh@941 -- # uname 00:24:44.801 16:07:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:44.801 16:07:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2546782 00:24:44.801 16:07:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:44.801 16:07:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:44.801 16:07:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2546782' 00:24:44.801 killing process with pid 2546782 00:24:44.801 16:07:24 -- common/autotest_common.sh@955 -- # kill 2546782 00:24:44.801 16:07:24 -- common/autotest_common.sh@960 -- # wait 2546782 00:24:46.706 16:07:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:46.706 16:07:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:46.706 16:07:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:46.706 16:07:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:46.706 16:07:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:46.706 16:07:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.706 16:07:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:46.706 16:07:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.612 16:07:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:48.612 00:24:48.612 real 0m40.975s 00:24:48.612 user 2m11.852s 00:24:48.612 sys 0m7.492s 00:24:48.612 16:07:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:48.612 16:07:27 -- common/autotest_common.sh@10 -- # set +x 00:24:48.612 ************************************ 00:24:48.612 END TEST nvmf_failover 00:24:48.612 ************************************ 00:24:48.612 16:07:27 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:48.612 16:07:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:48.612 16:07:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:48.612 16:07:27 -- common/autotest_common.sh@10 -- # set +x 00:24:48.612 ************************************ 00:24:48.612 START TEST nvmf_discovery 00:24:48.612 ************************************ 00:24:48.612 16:07:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:48.612 * Looking for test storage... 
00:24:48.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:48.612 16:07:28 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.612 16:07:28 -- nvmf/common.sh@7 -- # uname -s 00:24:48.612 16:07:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:48.612 16:07:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:48.612 16:07:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:48.612 16:07:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:48.612 16:07:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:48.612 16:07:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:48.612 16:07:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:48.612 16:07:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:48.612 16:07:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:48.612 16:07:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:48.612 16:07:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:48.612 16:07:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:48.612 16:07:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:48.612 16:07:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:48.612 16:07:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:48.612 16:07:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:48.612 16:07:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.612 16:07:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.612 16:07:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.612 16:07:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.612 16:07:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.612 16:07:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.612 16:07:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.612 16:07:28 -- paths/export.sh@5 -- # export PATH 00:24:48.612 16:07:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.612 16:07:28 -- nvmf/common.sh@47 -- # : 0 00:24:48.612 16:07:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:48.612 16:07:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:48.612 16:07:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:48.612 16:07:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:48.612 16:07:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:48.612 16:07:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:48.612 16:07:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:48.612 16:07:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:48.612 16:07:28 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:48.612 16:07:28 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:48.612 16:07:28 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:48.612 16:07:28 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:48.612 16:07:28 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:48.612 16:07:28 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:48.612 16:07:28 -- host/discovery.sh@25 -- # nvmftestinit 00:24:48.612 16:07:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:48.612 16:07:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:48.612 16:07:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:48.612 16:07:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:48.612 16:07:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:48.612 16:07:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.612 16:07:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:48.612 16:07:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.612 16:07:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:48.612 16:07:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:48.612 16:07:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:48.612 16:07:28 -- common/autotest_common.sh@10 -- # set +x 00:24:53.883 16:07:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:53.883 16:07:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:53.883 16:07:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:53.883 16:07:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:53.883 16:07:33 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:53.883 16:07:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:53.883 16:07:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:53.883 16:07:33 -- nvmf/common.sh@295 -- # net_devs=() 00:24:53.883 16:07:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:53.883 16:07:33 -- nvmf/common.sh@296 -- # e810=() 00:24:53.883 16:07:33 -- nvmf/common.sh@296 -- # local -ga e810 00:24:53.883 16:07:33 -- nvmf/common.sh@297 -- # x722=() 00:24:53.883 16:07:33 -- nvmf/common.sh@297 -- # local -ga x722 00:24:53.883 16:07:33 -- nvmf/common.sh@298 -- # mlx=() 00:24:53.883 16:07:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:53.883 16:07:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:53.883 16:07:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:53.883 16:07:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:53.883 16:07:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:53.883 16:07:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:53.883 16:07:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:53.883 16:07:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:53.883 16:07:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:53.883 16:07:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:53.883 16:07:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:53.883 16:07:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:53.883 16:07:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:53.883 16:07:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:53.883 16:07:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:53.883 16:07:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:53.883 16:07:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:53.883 16:07:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:53.883 16:07:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:53.883 16:07:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:53.883 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:53.883 16:07:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:53.883 16:07:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:53.883 16:07:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.883 16:07:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.883 16:07:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:53.883 16:07:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:53.883 16:07:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:53.883 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:53.883 16:07:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:53.883 16:07:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:53.883 16:07:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.883 16:07:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.883 16:07:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:53.883 16:07:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:53.883 16:07:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:53.883 16:07:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:53.883 16:07:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:53.883 
16:07:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.883 16:07:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:53.883 16:07:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.883 16:07:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:53.883 Found net devices under 0000:86:00.0: cvl_0_0 00:24:53.883 16:07:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.883 16:07:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:53.883 16:07:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.883 16:07:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:53.883 16:07:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.883 16:07:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:53.883 Found net devices under 0000:86:00.1: cvl_0_1 00:24:53.884 16:07:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.884 16:07:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:53.884 16:07:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:53.884 16:07:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:53.884 16:07:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:53.884 16:07:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:53.884 16:07:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:53.884 16:07:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:53.884 16:07:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:53.884 16:07:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:53.884 16:07:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:53.884 16:07:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:53.884 16:07:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:53.884 16:07:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:53.884 16:07:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:53.884 16:07:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:53.884 16:07:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:53.884 16:07:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:53.884 16:07:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:53.884 16:07:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:53.884 16:07:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:53.884 16:07:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:53.884 16:07:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:54.143 16:07:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:54.143 16:07:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:54.143 16:07:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:54.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:54.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:24:54.143 00:24:54.143 --- 10.0.0.2 ping statistics --- 00:24:54.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.143 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:24:54.143 16:07:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:54.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:54.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:24:54.143 00:24:54.143 --- 10.0.0.1 ping statistics --- 00:24:54.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.143 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:24:54.143 16:07:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.143 16:07:33 -- nvmf/common.sh@411 -- # return 0 00:24:54.143 16:07:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:54.143 16:07:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.143 16:07:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:54.143 16:07:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:54.143 16:07:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.143 16:07:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:54.143 16:07:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:54.143 16:07:33 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:54.143 16:07:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:54.143 16:07:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:54.143 16:07:33 -- common/autotest_common.sh@10 -- # set +x 00:24:54.143 16:07:33 -- nvmf/common.sh@470 -- # nvmfpid=2555633 00:24:54.143 16:07:33 -- nvmf/common.sh@471 -- # waitforlisten 2555633 00:24:54.143 16:07:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:54.143 16:07:33 -- common/autotest_common.sh@817 -- # '[' -z 2555633 ']' 00:24:54.143 16:07:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.143 16:07:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:54.143 16:07:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.143 16:07:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:54.143 16:07:33 -- common/autotest_common.sh@10 -- # set +x 00:24:54.143 [2024-04-26 16:07:33.728563] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:24:54.143 [2024-04-26 16:07:33.728655] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.143 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.401 [2024-04-26 16:07:33.837134] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.401 [2024-04-26 16:07:34.049998] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.401 [2024-04-26 16:07:34.050044] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.401 [2024-04-26 16:07:34.050054] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.401 [2024-04-26 16:07:34.050084] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.401 [2024-04-26 16:07:34.050095] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
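For readers unfamiliar with the nvmf/common.sh helpers traced a little further up (nvmf/common.sh@244-268), the network prep boils down to giving the target a namespaced interface at 10.0.0.2, leaving the initiator at 10.0.0.1, opening TCP port 4420, and ping-checking both directions. A condensed sketch of those same commands, with the interface and namespace names copied from the trace (real runs drive this through nvmf_tcp_init rather than by hand):

# start from clean addresses on both interfaces
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
# target side lives in its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# initiator side stays in the default namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity checks, matching the ping output in the log
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1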
00:24:54.401 [2024-04-26 16:07:34.050127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.968 16:07:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:54.969 16:07:34 -- common/autotest_common.sh@850 -- # return 0 00:24:54.969 16:07:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:54.969 16:07:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:54.969 16:07:34 -- common/autotest_common.sh@10 -- # set +x 00:24:54.969 16:07:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.969 16:07:34 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:54.969 16:07:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.969 16:07:34 -- common/autotest_common.sh@10 -- # set +x 00:24:54.969 [2024-04-26 16:07:34.537602] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.969 16:07:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.969 16:07:34 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:54.969 16:07:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.969 16:07:34 -- common/autotest_common.sh@10 -- # set +x 00:24:54.969 [2024-04-26 16:07:34.545739] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:54.969 16:07:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.969 16:07:34 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:54.969 16:07:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.969 16:07:34 -- common/autotest_common.sh@10 -- # set +x 00:24:54.969 null0 00:24:54.969 16:07:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.969 16:07:34 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:54.969 16:07:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.969 16:07:34 -- common/autotest_common.sh@10 -- # set +x 00:24:54.969 null1 00:24:54.969 16:07:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.969 16:07:34 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:54.969 16:07:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.969 16:07:34 -- common/autotest_common.sh@10 -- # set +x 00:24:54.969 16:07:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.969 16:07:34 -- host/discovery.sh@45 -- # hostpid=2555875 00:24:54.969 16:07:34 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:54.969 16:07:34 -- host/discovery.sh@46 -- # waitforlisten 2555875 /tmp/host.sock 00:24:54.969 16:07:34 -- common/autotest_common.sh@817 -- # '[' -z 2555875 ']' 00:24:54.969 16:07:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:24:54.969 16:07:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:54.969 16:07:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:54.969 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:54.969 16:07:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:54.969 16:07:34 -- common/autotest_common.sh@10 -- # set +x 00:24:54.969 [2024-04-26 16:07:34.648081] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
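The discovery-test target set up in the trace above is likewise a small set of RPCs against the namespaced nvmf_tgt: create the TCP transport, listen for discovery on 10.0.0.2:8009, and create two null bdevs for later use. A rough sketch using the rpc.py client seen elsewhere in this log (rpc_cmd in the test wraps the same calls; the options are copied verbatim from the trace):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                                                      # host/discovery.sh@32
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009  # host/discovery.sh@33
$RPC bdev_null_create null0 1000 512                                                              # host/discovery.sh@35
$RPC bdev_null_create null1 1000 512                                                              # host/discovery.sh@36
$RPC bdev_wait_for_examine                                                                        # host/discovery.sh@37

A second nvmf_tgt is then started with -r /tmp/host.sock to act as the host side, which is the process whose startup banner continues below.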
00:24:54.969 [2024-04-26 16:07:34.648164] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2555875 ] 00:24:55.227 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.227 [2024-04-26 16:07:34.751479] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.486 [2024-04-26 16:07:34.974787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.745 16:07:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:55.745 16:07:35 -- common/autotest_common.sh@850 -- # return 0 00:24:55.745 16:07:35 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:55.745 16:07:35 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:55.745 16:07:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.745 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:24:55.745 16:07:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.745 16:07:35 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:55.745 16:07:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.004 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:24:56.004 16:07:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.004 16:07:35 -- host/discovery.sh@72 -- # notify_id=0 00:24:56.004 16:07:35 -- host/discovery.sh@83 -- # get_subsystem_names 00:24:56.004 16:07:35 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:56.004 16:07:35 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:56.004 16:07:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.004 16:07:35 -- host/discovery.sh@59 -- # sort 00:24:56.004 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:24:56.004 16:07:35 -- host/discovery.sh@59 -- # xargs 00:24:56.004 16:07:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.004 16:07:35 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:56.004 16:07:35 -- host/discovery.sh@84 -- # get_bdev_list 00:24:56.004 16:07:35 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.004 16:07:35 -- host/discovery.sh@55 -- # xargs 00:24:56.004 16:07:35 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:56.004 16:07:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.004 16:07:35 -- host/discovery.sh@55 -- # sort 00:24:56.004 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:24:56.004 16:07:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.004 16:07:35 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:56.004 16:07:35 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:56.004 16:07:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.004 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:24:56.004 16:07:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.004 16:07:35 -- host/discovery.sh@87 -- # get_subsystem_names 00:24:56.004 16:07:35 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:56.004 16:07:35 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:56.004 16:07:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.004 16:07:35 -- host/discovery.sh@59 -- # sort 
00:24:56.004 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:24:56.004 16:07:35 -- host/discovery.sh@59 -- # xargs 00:24:56.004 16:07:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.004 16:07:35 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:56.004 16:07:35 -- host/discovery.sh@88 -- # get_bdev_list 00:24:56.004 16:07:35 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:56.004 16:07:35 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.004 16:07:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.004 16:07:35 -- host/discovery.sh@55 -- # sort 00:24:56.004 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:24:56.004 16:07:35 -- host/discovery.sh@55 -- # xargs 00:24:56.004 16:07:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.004 16:07:35 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:56.004 16:07:35 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:56.004 16:07:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.004 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:24:56.004 16:07:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.005 16:07:35 -- host/discovery.sh@91 -- # get_subsystem_names 00:24:56.005 16:07:35 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:56.005 16:07:35 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:56.005 16:07:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.005 16:07:35 -- host/discovery.sh@59 -- # sort 00:24:56.005 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:24:56.005 16:07:35 -- host/discovery.sh@59 -- # xargs 00:24:56.005 16:07:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.005 16:07:35 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:56.005 16:07:35 -- host/discovery.sh@92 -- # get_bdev_list 00:24:56.005 16:07:35 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.005 16:07:35 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:56.005 16:07:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.005 16:07:35 -- host/discovery.sh@55 -- # sort 00:24:56.005 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:24:56.005 16:07:35 -- host/discovery.sh@55 -- # xargs 00:24:56.264 16:07:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.264 16:07:35 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:56.264 16:07:35 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:56.264 16:07:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.264 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:24:56.264 [2024-04-26 16:07:35.732949] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.264 16:07:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.264 16:07:35 -- host/discovery.sh@97 -- # get_subsystem_names 00:24:56.264 16:07:35 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:56.264 16:07:35 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:56.264 16:07:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.264 16:07:35 -- host/discovery.sh@59 -- # sort 00:24:56.264 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:24:56.264 16:07:35 -- host/discovery.sh@59 -- # xargs 00:24:56.264 16:07:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.264 16:07:35 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:56.264 16:07:35 -- host/discovery.sh@98 -- # get_bdev_list 00:24:56.264 16:07:35 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.264 16:07:35 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:56.264 16:07:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.264 16:07:35 -- host/discovery.sh@55 -- # sort 00:24:56.264 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:24:56.264 16:07:35 -- host/discovery.sh@55 -- # xargs 00:24:56.264 16:07:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.264 16:07:35 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:56.264 16:07:35 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:56.264 16:07:35 -- host/discovery.sh@79 -- # expected_count=0 00:24:56.264 16:07:35 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:56.264 16:07:35 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:56.264 16:07:35 -- common/autotest_common.sh@901 -- # local max=10 00:24:56.264 16:07:35 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:56.264 16:07:35 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:56.264 16:07:35 -- common/autotest_common.sh@903 -- # get_notification_count 00:24:56.264 16:07:35 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:56.264 16:07:35 -- host/discovery.sh@74 -- # jq '. | length' 00:24:56.264 16:07:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.264 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:24:56.264 16:07:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.264 16:07:35 -- host/discovery.sh@74 -- # notification_count=0 00:24:56.264 16:07:35 -- host/discovery.sh@75 -- # notify_id=0 00:24:56.264 16:07:35 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:24:56.264 16:07:35 -- common/autotest_common.sh@904 -- # return 0 00:24:56.264 16:07:35 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:56.264 16:07:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.264 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:24:56.264 16:07:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.264 16:07:35 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:56.264 16:07:35 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:56.264 16:07:35 -- common/autotest_common.sh@901 -- # local max=10 00:24:56.264 16:07:35 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:56.264 16:07:35 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:56.264 16:07:35 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:24:56.264 16:07:35 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:56.264 16:07:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.264 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:24:56.264 16:07:35 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:56.264 16:07:35 -- host/discovery.sh@59 -- # sort 00:24:56.264 16:07:35 -- host/discovery.sh@59 -- # xargs 00:24:56.264 16:07:35 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:24:56.264 16:07:35 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:24:56.264 16:07:35 -- common/autotest_common.sh@906 -- # sleep 1 00:24:56.831 [2024-04-26 16:07:36.489316] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:56.831 [2024-04-26 16:07:36.489344] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:56.831 [2024-04-26 16:07:36.489369] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:57.089 [2024-04-26 16:07:36.577653] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:57.089 [2024-04-26 16:07:36.679810] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:57.089 [2024-04-26 16:07:36.679837] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:57.348 16:07:36 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:57.348 16:07:36 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:57.348 16:07:36 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:24:57.348 16:07:36 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:57.348 16:07:36 -- host/discovery.sh@59 -- # xargs 00:24:57.348 16:07:36 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:57.348 16:07:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.348 16:07:36 -- host/discovery.sh@59 -- # sort 00:24:57.348 16:07:36 -- common/autotest_common.sh@10 -- # set +x 00:24:57.348 16:07:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.348 16:07:36 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.348 16:07:36 -- common/autotest_common.sh@904 -- # return 0 00:24:57.348 16:07:36 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:57.348 16:07:36 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:57.348 16:07:36 -- common/autotest_common.sh@901 -- # local max=10 00:24:57.348 16:07:36 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:57.348 16:07:36 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:57.348 16:07:36 -- common/autotest_common.sh@903 -- # get_bdev_list 00:24:57.348 16:07:37 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.348 16:07:37 -- host/discovery.sh@55 -- # xargs 00:24:57.348 16:07:37 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:57.348 16:07:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.348 16:07:37 -- host/discovery.sh@55 -- # sort 00:24:57.348 16:07:37 -- common/autotest_common.sh@10 -- # set +x 00:24:57.348 16:07:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.608 16:07:37 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:57.608 16:07:37 -- common/autotest_common.sh@904 -- # return 0 00:24:57.608 16:07:37 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:57.608 16:07:37 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:57.608 16:07:37 -- common/autotest_common.sh@901 -- # local max=10 00:24:57.608 16:07:37 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:24:57.608 16:07:37 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:57.608 16:07:37 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:24:57.608 16:07:37 -- host/discovery.sh@63 -- # xargs 00:24:57.608 16:07:37 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:57.608 16:07:37 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:57.608 16:07:37 -- host/discovery.sh@63 -- # sort -n 00:24:57.608 16:07:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.608 16:07:37 -- common/autotest_common.sh@10 -- # set +x 00:24:57.608 16:07:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.608 16:07:37 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:24:57.608 16:07:37 -- common/autotest_common.sh@904 -- # return 0 00:24:57.608 16:07:37 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:57.608 16:07:37 -- host/discovery.sh@79 -- # expected_count=1 00:24:57.608 16:07:37 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:57.608 16:07:37 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:57.608 16:07:37 -- common/autotest_common.sh@901 -- # local max=10 00:24:57.608 16:07:37 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:57.608 16:07:37 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:57.608 16:07:37 -- common/autotest_common.sh@903 -- # get_notification_count 00:24:57.608 16:07:37 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:57.608 16:07:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.608 16:07:37 -- common/autotest_common.sh@10 -- # set +x 00:24:57.608 16:07:37 -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:57.608 16:07:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.608 16:07:37 -- host/discovery.sh@74 -- # notification_count=1 00:24:57.608 16:07:37 -- host/discovery.sh@75 -- # notify_id=1 00:24:57.608 16:07:37 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:24:57.608 16:07:37 -- common/autotest_common.sh@904 -- # return 0 00:24:57.608 16:07:37 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:57.608 16:07:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.608 16:07:37 -- common/autotest_common.sh@10 -- # set +x 00:24:57.608 16:07:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.608 16:07:37 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:57.608 16:07:37 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:57.608 16:07:37 -- common/autotest_common.sh@901 -- # local max=10 00:24:57.608 16:07:37 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:57.608 16:07:37 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:57.608 16:07:37 -- common/autotest_common.sh@903 -- # get_bdev_list 00:24:57.608 16:07:37 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.608 16:07:37 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:57.608 16:07:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.608 16:07:37 -- host/discovery.sh@55 -- # sort 00:24:57.608 16:07:37 -- common/autotest_common.sh@10 -- # set +x 00:24:57.608 16:07:37 -- host/discovery.sh@55 -- # xargs 00:24:57.608 16:07:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.608 16:07:37 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:57.608 16:07:37 -- common/autotest_common.sh@904 -- # return 0 00:24:57.608 16:07:37 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:57.608 16:07:37 -- host/discovery.sh@79 -- # expected_count=1 00:24:57.608 16:07:37 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:57.608 16:07:37 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:57.608 16:07:37 -- common/autotest_common.sh@901 -- # local max=10 00:24:57.608 16:07:37 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:57.608 16:07:37 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:57.608 16:07:37 -- common/autotest_common.sh@903 -- # get_notification_count 00:24:57.608 16:07:37 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:57.608 16:07:37 -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:57.608 16:07:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.608 16:07:37 -- common/autotest_common.sh@10 -- # set +x 00:24:57.608 16:07:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.608 16:07:37 -- host/discovery.sh@74 -- # notification_count=0 00:24:57.608 16:07:37 -- host/discovery.sh@75 -- # notify_id=1 00:24:57.608 16:07:37 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:24:57.608 16:07:37 -- common/autotest_common.sh@906 -- # sleep 1 00:24:58.985 16:07:38 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:58.985 16:07:38 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:58.985 16:07:38 -- common/autotest_common.sh@903 -- # get_notification_count 00:24:58.985 16:07:38 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:58.985 16:07:38 -- host/discovery.sh@74 -- # jq '. | length' 00:24:58.985 16:07:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.985 16:07:38 -- common/autotest_common.sh@10 -- # set +x 00:24:58.985 16:07:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.985 16:07:38 -- host/discovery.sh@74 -- # notification_count=1 00:24:58.985 16:07:38 -- host/discovery.sh@75 -- # notify_id=2 00:24:58.985 16:07:38 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:24:58.985 16:07:38 -- common/autotest_common.sh@904 -- # return 0 00:24:58.985 16:07:38 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:58.985 16:07:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.985 16:07:38 -- common/autotest_common.sh@10 -- # set +x 00:24:58.985 [2024-04-26 16:07:38.337572] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:58.985 [2024-04-26 16:07:38.337942] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:58.985 [2024-04-26 16:07:38.337986] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:58.985 16:07:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.985 16:07:38 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:58.985 16:07:38 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:58.985 16:07:38 -- common/autotest_common.sh@901 -- # local max=10 00:24:58.985 16:07:38 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:58.985 16:07:38 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:58.985 16:07:38 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:24:58.985 16:07:38 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:58.985 16:07:38 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:58.985 16:07:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.985 16:07:38 -- host/discovery.sh@59 -- # sort 00:24:58.985 16:07:38 -- common/autotest_common.sh@10 -- # set +x 00:24:58.985 16:07:38 -- host/discovery.sh@59 -- # xargs 00:24:58.985 16:07:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.985 16:07:38 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.985 16:07:38 -- common/autotest_common.sh@904 -- # return 0 00:24:58.985 16:07:38 -- host/discovery.sh@121 -- # 
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:58.985 16:07:38 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:58.985 16:07:38 -- common/autotest_common.sh@901 -- # local max=10 00:24:58.985 16:07:38 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:58.985 16:07:38 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:58.985 16:07:38 -- common/autotest_common.sh@903 -- # get_bdev_list 00:24:58.985 16:07:38 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:58.985 16:07:38 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:58.985 16:07:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.985 16:07:38 -- host/discovery.sh@55 -- # sort 00:24:58.985 16:07:38 -- common/autotest_common.sh@10 -- # set +x 00:24:58.985 16:07:38 -- host/discovery.sh@55 -- # xargs 00:24:58.985 16:07:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.985 [2024-04-26 16:07:38.425651] bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:58.985 16:07:38 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:58.985 16:07:38 -- common/autotest_common.sh@904 -- # return 0 00:24:58.985 16:07:38 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:58.985 16:07:38 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:58.985 16:07:38 -- common/autotest_common.sh@901 -- # local max=10 00:24:58.985 16:07:38 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:58.985 16:07:38 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:58.985 16:07:38 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:24:58.985 16:07:38 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:58.985 16:07:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.985 16:07:38 -- common/autotest_common.sh@10 -- # set +x 00:24:58.985 16:07:38 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:58.985 16:07:38 -- host/discovery.sh@63 -- # sort -n 00:24:58.985 16:07:38 -- host/discovery.sh@63 -- # xargs 00:24:58.985 16:07:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.985 16:07:38 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:58.985 16:07:38 -- common/autotest_common.sh@906 -- # sleep 1 00:24:59.243 [2024-04-26 16:07:38.729266] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:59.243 [2024-04-26 16:07:38.729295] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:59.243 [2024-04-26 16:07:38.729304] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:59.814 16:07:39 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:59.814 16:07:39 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:59.814 16:07:39 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:24:59.814 
16:07:39 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:59.814 16:07:39 -- host/discovery.sh@63 -- # xargs 00:24:59.814 16:07:39 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:59.814 16:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.814 16:07:39 -- host/discovery.sh@63 -- # sort -n 00:24:59.814 16:07:39 -- common/autotest_common.sh@10 -- # set +x 00:25:00.072 16:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.072 16:07:39 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:00.072 16:07:39 -- common/autotest_common.sh@904 -- # return 0 00:25:00.072 16:07:39 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:00.072 16:07:39 -- host/discovery.sh@79 -- # expected_count=0 00:25:00.072 16:07:39 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:00.072 16:07:39 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:00.072 16:07:39 -- common/autotest_common.sh@901 -- # local max=10 00:25:00.072 16:07:39 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:00.072 16:07:39 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:00.072 16:07:39 -- common/autotest_common.sh@903 -- # get_notification_count 00:25:00.072 16:07:39 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:00.072 16:07:39 -- host/discovery.sh@74 -- # jq '. | length' 00:25:00.072 16:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.072 16:07:39 -- common/autotest_common.sh@10 -- # set +x 00:25:00.072 16:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.072 16:07:39 -- host/discovery.sh@74 -- # notification_count=0 00:25:00.072 16:07:39 -- host/discovery.sh@75 -- # notify_id=2 00:25:00.072 16:07:39 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:25:00.072 16:07:39 -- common/autotest_common.sh@904 -- # return 0 00:25:00.073 16:07:39 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:00.073 16:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.073 16:07:39 -- common/autotest_common.sh@10 -- # set +x 00:25:00.073 [2024-04-26 16:07:39.581729] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:00.073 [2024-04-26 16:07:39.581761] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:00.073 16:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.073 16:07:39 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:00.073 16:07:39 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:00.073 16:07:39 -- common/autotest_common.sh@901 -- # local max=10 00:25:00.073 16:07:39 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:00.073 16:07:39 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:00.073 16:07:39 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:25:00.073 16:07:39 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:00.073 [2024-04-26 16:07:39.590761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:00.073 [2024-04-26 16:07:39.590794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.073 [2024-04-26 16:07:39.590809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:00.073 [2024-04-26 16:07:39.590819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.073 [2024-04-26 16:07:39.590829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:00.073 [2024-04-26 16:07:39.590840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.073 [2024-04-26 16:07:39.590850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:00.073 [2024-04-26 16:07:39.590860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:00.073 [2024-04-26 16:07:39.590870] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:25:00.073 16:07:39 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:00.073 16:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.073 16:07:39 -- host/discovery.sh@59 -- # sort 00:25:00.073 16:07:39 -- common/autotest_common.sh@10 -- # set +x 00:25:00.073 16:07:39 -- host/discovery.sh@59 -- # xargs 00:25:00.073 [2024-04-26 16:07:39.600778] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:25:00.073 16:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.073 [2024-04-26 16:07:39.610824] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:00.073 [2024-04-26 16:07:39.611322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-04-26 16:07:39.611705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-04-26 16:07:39.611721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:25:00.073 [2024-04-26 16:07:39.611733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:25:00.073 [2024-04-26 16:07:39.611749] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:25:00.073 [2024-04-26 16:07:39.611780] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:00.073 [2024-04-26 16:07:39.611791] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:00.073 [2024-04-26 16:07:39.611805] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:00.073 [2024-04-26 16:07:39.611821] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
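The burst of connect() failed, errno = 111 entries here is expected rather than a test failure: the 10.0.0.2:4420 listener was just removed, so the host's reconnect attempts to that path are refused until the discovery poller drops it and only 4421 remains. The script rides this out with the polling helper whose xtrace (common/autotest_common.sh@900-906) appears throughout this section; its shape is roughly the following, with the failure return assumed since only the success path shows up in the trace:

    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0          # condition became true within the retry budget
            fi
            sleep 1
        done
        return 1                  # assumed: give up after roughly ten seconds
    }

    # e.g. wait until only the second listener's port is reported for nvme0:
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'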
00:25:00.073 [2024-04-26 16:07:39.620902] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:00.073 [2024-04-26 16:07:39.621468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-04-26 16:07:39.621814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-04-26 16:07:39.621828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:25:00.073 [2024-04-26 16:07:39.621839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:25:00.073 [2024-04-26 16:07:39.621854] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:25:00.073 [2024-04-26 16:07:39.621877] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:00.073 [2024-04-26 16:07:39.621887] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:00.073 [2024-04-26 16:07:39.621897] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:00.073 [2024-04-26 16:07:39.621919] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.073 16:07:39 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.073 16:07:39 -- common/autotest_common.sh@904 -- # return 0 00:25:00.073 16:07:39 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:00.073 16:07:39 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:00.073 16:07:39 -- common/autotest_common.sh@901 -- # local max=10 00:25:00.073 16:07:39 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:00.073 16:07:39 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:00.073 16:07:39 -- common/autotest_common.sh@903 -- # get_bdev_list 00:25:00.073 16:07:39 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:00.073 16:07:39 -- host/discovery.sh@55 -- # xargs 00:25:00.073 16:07:39 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:00.073 16:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.073 16:07:39 -- host/discovery.sh@55 -- # sort 00:25:00.073 16:07:39 -- common/autotest_common.sh@10 -- # set +x 00:25:00.073 [2024-04-26 16:07:39.630973] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:00.073 [2024-04-26 16:07:39.631490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-04-26 16:07:39.631864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-04-26 16:07:39.631883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:25:00.073 [2024-04-26 16:07:39.631894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:25:00.074 [2024-04-26 16:07:39.631909] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:25:00.074 [2024-04-26 16:07:39.631938] 
nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:00.074 [2024-04-26 16:07:39.631948] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:00.074 [2024-04-26 16:07:39.631957] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:00.074 [2024-04-26 16:07:39.631979] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.074 [2024-04-26 16:07:39.641054] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:00.074 [2024-04-26 16:07:39.641472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-04-26 16:07:39.641811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-04-26 16:07:39.641825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:25:00.074 [2024-04-26 16:07:39.641835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:25:00.074 [2024-04-26 16:07:39.641850] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:25:00.074 [2024-04-26 16:07:39.641874] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:00.074 [2024-04-26 16:07:39.641883] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:00.074 [2024-04-26 16:07:39.641892] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:00.074 [2024-04-26 16:07:39.641907] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.074 [2024-04-26 16:07:39.651129] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:00.074 [2024-04-26 16:07:39.651602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-04-26 16:07:39.651926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-04-26 16:07:39.651940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:25:00.074 [2024-04-26 16:07:39.651950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:25:00.074 [2024-04-26 16:07:39.651965] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:25:00.074 [2024-04-26 16:07:39.652000] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:00.074 [2024-04-26 16:07:39.652010] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:00.074 [2024-04-26 16:07:39.652019] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:00.074 [2024-04-26 16:07:39.652032] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.074 [2024-04-26 16:07:39.661204] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:00.074 [2024-04-26 16:07:39.661655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-04-26 16:07:39.662051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-04-26 16:07:39.662064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:25:00.074 [2024-04-26 16:07:39.662083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:25:00.074 [2024-04-26 16:07:39.662098] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:25:00.074 [2024-04-26 16:07:39.662127] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:00.074 [2024-04-26 16:07:39.662136] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:00.074 [2024-04-26 16:07:39.662145] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:00.074 [2024-04-26 16:07:39.662159] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.074 16:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.074 [2024-04-26 16:07:39.671270] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:00.074 [2024-04-26 16:07:39.671653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-04-26 16:07:39.671992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-04-26 16:07:39.672007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:25:00.074 [2024-04-26 16:07:39.672017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:25:00.074 [2024-04-26 16:07:39.672031] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:25:00.074 [2024-04-26 16:07:39.672052] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:00.074 [2024-04-26 16:07:39.672062] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:00.074 [2024-04-26 16:07:39.672079] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:00.074 [2024-04-26 16:07:39.672093] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.074 16:07:39 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:00.074 16:07:39 -- common/autotest_common.sh@904 -- # return 0 00:25:00.074 16:07:39 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:00.074 16:07:39 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:00.074 16:07:39 -- common/autotest_common.sh@901 -- # local max=10 00:25:00.074 16:07:39 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:00.074 16:07:39 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:00.074 16:07:39 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:25:00.074 16:07:39 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:00.074 16:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.074 16:07:39 -- common/autotest_common.sh@10 -- # set +x 00:25:00.074 16:07:39 -- host/discovery.sh@63 -- # xargs 00:25:00.074 16:07:39 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:00.074 16:07:39 -- host/discovery.sh@63 -- # sort -n 00:25:00.075 [2024-04-26 16:07:39.681349] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:00.075 [2024-04-26 16:07:39.681607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.075 [2024-04-26 16:07:39.681958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.075 [2024-04-26 16:07:39.681971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:25:00.075 [2024-04-26 16:07:39.681980] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:25:00.075 [2024-04-26 16:07:39.681995] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:25:00.075 [2024-04-26 16:07:39.682010] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:00.075 [2024-04-26 16:07:39.682018] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:00.075 [2024-04-26 16:07:39.682026] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:00.075 [2024-04-26 16:07:39.682039] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
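The trsvcid comparisons in the surrounding entries ([[ 4420 4421 == \4\4\2\1 ]] while the stale path lingers, then [[ 4421 == \4\4\2\1 ]] once it is gone) are produced by a helper that lists the TCP service IDs of every path of a single controller, approximately as below (rpc_cmd and HOST_SOCK as in the sketch above):

    get_subsystem_paths() {
        local nvme_name=$1
        # one trsvcid per attached path, numerically sorted, e.g. "4420 4421" -> "4421"
        rpc_cmd -s "$HOST_SOCK" bdev_nvme_get_controllers -n "$nvme_name" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }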
00:25:00.075 [2024-04-26 16:07:39.691424] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:00.075 [2024-04-26 16:07:39.691812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.075 [2024-04-26 16:07:39.692163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.075 [2024-04-26 16:07:39.692178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:25:00.075 [2024-04-26 16:07:39.692188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:25:00.075 [2024-04-26 16:07:39.692203] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:25:00.075 [2024-04-26 16:07:39.692232] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:00.075 [2024-04-26 16:07:39.692241] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:00.075 [2024-04-26 16:07:39.692251] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:00.075 [2024-04-26 16:07:39.692264] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.075 16:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.075 [2024-04-26 16:07:39.701493] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:00.075 [2024-04-26 16:07:39.701891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.075 [2024-04-26 16:07:39.702190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.075 [2024-04-26 16:07:39.702204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:25:00.075 [2024-04-26 16:07:39.702214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:25:00.075 [2024-04-26 16:07:39.702229] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:25:00.075 [2024-04-26 16:07:39.702251] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:00.075 [2024-04-26 16:07:39.702260] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:00.075 [2024-04-26 16:07:39.702269] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:00.075 [2024-04-26 16:07:39.702283] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
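Alongside the path checks, the script verifies that the host saw the matching change notifications. The is_notification_count_eq checks in the entries that follow count events newer than the last seen notification ID and advance that cursor, roughly as sketched here (reusing rpc_cmd, HOST_SOCK and waitforcondition from the sketches above; the exact bookkeeping is inferred from the notify_id and notification_count values in the trace):

    notify_id=0
    get_notification_count() {
        notification_count=$(rpc_cmd -s "$HOST_SOCK" notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    is_notification_count_eq() {
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }

In this stretch the expected count is 0 (swapping the 4420 path for 4421 raises no new bdev events), and it rises to 2 only after discovery is stopped and both nvme0n1 and nvme0n2 are unregistered.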
00:25:00.075 [2024-04-26 16:07:39.710377] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:00.075 [2024-04-26 16:07:39.710405] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:00.075 16:07:39 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:25:00.075 16:07:39 -- common/autotest_common.sh@906 -- # sleep 1 00:25:01.450 16:07:40 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:01.450 16:07:40 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:01.451 16:07:40 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:25:01.451 16:07:40 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:01.451 16:07:40 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:01.451 16:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.451 16:07:40 -- host/discovery.sh@63 -- # sort -n 00:25:01.451 16:07:40 -- common/autotest_common.sh@10 -- # set +x 00:25:01.451 16:07:40 -- host/discovery.sh@63 -- # xargs 00:25:01.451 16:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.451 16:07:40 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:25:01.451 16:07:40 -- common/autotest_common.sh@904 -- # return 0 00:25:01.451 16:07:40 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:01.451 16:07:40 -- host/discovery.sh@79 -- # expected_count=0 00:25:01.451 16:07:40 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:01.451 16:07:40 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:01.451 16:07:40 -- common/autotest_common.sh@901 -- # local max=10 00:25:01.451 16:07:40 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:01.451 16:07:40 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:01.451 16:07:40 -- common/autotest_common.sh@903 -- # get_notification_count 00:25:01.451 16:07:40 -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:01.451 16:07:40 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:01.451 16:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.451 16:07:40 -- common/autotest_common.sh@10 -- # set +x 00:25:01.451 16:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.451 16:07:40 -- host/discovery.sh@74 -- # notification_count=0 00:25:01.451 16:07:40 -- host/discovery.sh@75 -- # notify_id=2 00:25:01.451 16:07:40 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:25:01.451 16:07:40 -- common/autotest_common.sh@904 -- # return 0 00:25:01.451 16:07:40 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:01.451 16:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.451 16:07:40 -- common/autotest_common.sh@10 -- # set +x 00:25:01.451 16:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.451 16:07:40 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:01.451 16:07:40 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:01.451 16:07:40 -- common/autotest_common.sh@901 -- # local max=10 00:25:01.451 16:07:40 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:01.451 16:07:40 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:01.451 16:07:40 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:25:01.451 16:07:40 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:01.451 16:07:40 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:01.451 16:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.451 16:07:40 -- host/discovery.sh@59 -- # sort 00:25:01.451 16:07:40 -- common/autotest_common.sh@10 -- # set +x 00:25:01.451 16:07:40 -- host/discovery.sh@59 -- # xargs 00:25:01.451 16:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.451 16:07:40 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:25:01.451 16:07:40 -- common/autotest_common.sh@904 -- # return 0 00:25:01.451 16:07:40 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:01.451 16:07:40 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:01.451 16:07:40 -- common/autotest_common.sh@901 -- # local max=10 00:25:01.451 16:07:40 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:01.451 16:07:40 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:01.451 16:07:40 -- common/autotest_common.sh@903 -- # get_bdev_list 00:25:01.451 16:07:40 -- host/discovery.sh@55 -- # sort 00:25:01.451 16:07:40 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.451 16:07:40 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:01.451 16:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.451 16:07:40 -- common/autotest_common.sh@10 -- # set +x 00:25:01.451 16:07:40 -- host/discovery.sh@55 -- # xargs 00:25:01.451 16:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.451 16:07:40 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:25:01.451 16:07:40 -- common/autotest_common.sh@904 -- # return 0 00:25:01.451 16:07:40 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:01.451 16:07:40 -- host/discovery.sh@79 -- # expected_count=2 00:25:01.451 16:07:40 -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:25:01.451 16:07:40 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:01.451 16:07:40 -- common/autotest_common.sh@901 -- # local max=10 00:25:01.451 16:07:40 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:01.451 16:07:40 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:01.451 16:07:40 -- common/autotest_common.sh@903 -- # get_notification_count 00:25:01.451 16:07:40 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:01.451 16:07:40 -- host/discovery.sh@74 -- # jq '. | length' 00:25:01.451 16:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.451 16:07:40 -- common/autotest_common.sh@10 -- # set +x 00:25:01.451 16:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.451 16:07:40 -- host/discovery.sh@74 -- # notification_count=2 00:25:01.451 16:07:40 -- host/discovery.sh@75 -- # notify_id=4 00:25:01.451 16:07:40 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:25:01.451 16:07:40 -- common/autotest_common.sh@904 -- # return 0 00:25:01.451 16:07:40 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:01.451 16:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.451 16:07:40 -- common/autotest_common.sh@10 -- # set +x 00:25:02.389 [2024-04-26 16:07:41.992250] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:02.389 [2024-04-26 16:07:41.992284] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:02.389 [2024-04-26 16:07:41.992307] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:02.647 [2024-04-26 16:07:42.078583] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:02.906 [2024-04-26 16:07:42.350478] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:02.907 [2024-04-26 16:07:42.350514] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:02.907 16:07:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.907 16:07:42 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:02.907 16:07:42 -- common/autotest_common.sh@638 -- # local es=0 00:25:02.907 16:07:42 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:02.907 16:07:42 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:02.907 16:07:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:02.907 16:07:42 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:02.907 16:07:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:02.907 16:07:42 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
-w 00:25:02.907 16:07:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.907 16:07:42 -- common/autotest_common.sh@10 -- # set +x 00:25:02.907 request: 00:25:02.907 { 00:25:02.907 "name": "nvme", 00:25:02.907 "trtype": "tcp", 00:25:02.907 "traddr": "10.0.0.2", 00:25:02.907 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:02.907 "adrfam": "ipv4", 00:25:02.907 "trsvcid": "8009", 00:25:02.907 "wait_for_attach": true, 00:25:02.907 "method": "bdev_nvme_start_discovery", 00:25:02.907 "req_id": 1 00:25:02.907 } 00:25:02.907 Got JSON-RPC error response 00:25:02.907 response: 00:25:02.907 { 00:25:02.907 "code": -17, 00:25:02.907 "message": "File exists" 00:25:02.907 } 00:25:02.907 16:07:42 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:02.907 16:07:42 -- common/autotest_common.sh@641 -- # es=1 00:25:02.907 16:07:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:02.907 16:07:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:02.907 16:07:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:02.907 16:07:42 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:02.907 16:07:42 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:02.907 16:07:42 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:02.907 16:07:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.907 16:07:42 -- host/discovery.sh@67 -- # sort 00:25:02.907 16:07:42 -- common/autotest_common.sh@10 -- # set +x 00:25:02.907 16:07:42 -- host/discovery.sh@67 -- # xargs 00:25:02.907 16:07:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.907 16:07:42 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:02.907 16:07:42 -- host/discovery.sh@146 -- # get_bdev_list 00:25:02.907 16:07:42 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.907 16:07:42 -- host/discovery.sh@55 -- # xargs 00:25:02.907 16:07:42 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:02.907 16:07:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.907 16:07:42 -- host/discovery.sh@55 -- # sort 00:25:02.907 16:07:42 -- common/autotest_common.sh@10 -- # set +x 00:25:02.907 16:07:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.907 16:07:42 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:02.907 16:07:42 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:02.907 16:07:42 -- common/autotest_common.sh@638 -- # local es=0 00:25:02.907 16:07:42 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:02.907 16:07:42 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:02.907 16:07:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:02.907 16:07:42 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:02.907 16:07:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:02.907 16:07:42 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:02.907 16:07:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.907 16:07:42 -- common/autotest_common.sh@10 -- # set +x 00:25:02.907 request: 00:25:02.907 { 00:25:02.907 "name": "nvme_second", 
00:25:02.907 "trtype": "tcp", 00:25:02.907 "traddr": "10.0.0.2", 00:25:02.907 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:02.907 "adrfam": "ipv4", 00:25:02.907 "trsvcid": "8009", 00:25:02.907 "wait_for_attach": true, 00:25:02.907 "method": "bdev_nvme_start_discovery", 00:25:02.907 "req_id": 1 00:25:02.907 } 00:25:02.907 Got JSON-RPC error response 00:25:02.907 response: 00:25:02.907 { 00:25:02.907 "code": -17, 00:25:02.907 "message": "File exists" 00:25:02.907 } 00:25:02.907 16:07:42 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:02.907 16:07:42 -- common/autotest_common.sh@641 -- # es=1 00:25:02.907 16:07:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:02.907 16:07:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:02.907 16:07:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:02.907 16:07:42 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:02.907 16:07:42 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:02.907 16:07:42 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:02.907 16:07:42 -- host/discovery.sh@67 -- # xargs 00:25:02.907 16:07:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.907 16:07:42 -- common/autotest_common.sh@10 -- # set +x 00:25:02.907 16:07:42 -- host/discovery.sh@67 -- # sort 00:25:02.907 16:07:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.907 16:07:42 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:02.907 16:07:42 -- host/discovery.sh@152 -- # get_bdev_list 00:25:02.907 16:07:42 -- host/discovery.sh@55 -- # sort 00:25:02.907 16:07:42 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.907 16:07:42 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:02.907 16:07:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.907 16:07:42 -- common/autotest_common.sh@10 -- # set +x 00:25:02.907 16:07:42 -- host/discovery.sh@55 -- # xargs 00:25:02.907 16:07:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.907 16:07:42 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:03.166 16:07:42 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:03.166 16:07:42 -- common/autotest_common.sh@638 -- # local es=0 00:25:03.166 16:07:42 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:03.166 16:07:42 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:03.166 16:07:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:03.166 16:07:42 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:03.166 16:07:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:03.166 16:07:42 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:03.166 16:07:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.166 16:07:42 -- common/autotest_common.sh@10 -- # set +x 00:25:04.101 [2024-04-26 16:07:43.602360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.101 [2024-04-26 16:07:43.602736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.101 [2024-04-26 16:07:43.602753] 
nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000011840 with addr=10.0.0.2, port=8010 00:25:04.101 [2024-04-26 16:07:43.602802] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:04.101 [2024-04-26 16:07:43.602812] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:04.101 [2024-04-26 16:07:43.602831] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:05.037 [2024-04-26 16:07:44.604817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.037 [2024-04-26 16:07:44.605232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.037 [2024-04-26 16:07:44.605249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000011a40 with addr=10.0.0.2, port=8010 00:25:05.037 [2024-04-26 16:07:44.605295] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:05.037 [2024-04-26 16:07:44.605304] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:05.037 [2024-04-26 16:07:44.605314] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:05.973 [2024-04-26 16:07:45.606711] bdev_nvme.c:6966:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:05.973 request: 00:25:05.973 { 00:25:05.973 "name": "nvme_second", 00:25:05.973 "trtype": "tcp", 00:25:05.973 "traddr": "10.0.0.2", 00:25:05.973 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:05.973 "adrfam": "ipv4", 00:25:05.973 "trsvcid": "8010", 00:25:05.973 "attach_timeout_ms": 3000, 00:25:05.973 "method": "bdev_nvme_start_discovery", 00:25:05.973 "req_id": 1 00:25:05.973 } 00:25:05.973 Got JSON-RPC error response 00:25:05.973 response: 00:25:05.973 { 00:25:05.973 "code": -110, 00:25:05.973 "message": "Connection timed out" 00:25:05.973 } 00:25:05.973 16:07:45 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:05.973 16:07:45 -- common/autotest_common.sh@641 -- # es=1 00:25:05.973 16:07:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:05.973 16:07:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:05.973 16:07:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:05.973 16:07:45 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:05.973 16:07:45 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:05.973 16:07:45 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:05.973 16:07:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:05.973 16:07:45 -- host/discovery.sh@67 -- # sort 00:25:05.973 16:07:45 -- common/autotest_common.sh@10 -- # set +x 00:25:05.973 16:07:45 -- host/discovery.sh@67 -- # xargs 00:25:05.973 16:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.233 16:07:45 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:06.233 16:07:45 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:06.233 16:07:45 -- host/discovery.sh@161 -- # kill 2555875 00:25:06.233 16:07:45 -- host/discovery.sh@162 -- # nvmftestfini 00:25:06.233 16:07:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:06.233 16:07:45 -- nvmf/common.sh@117 -- # sync 00:25:06.233 16:07:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:06.233 16:07:45 -- nvmf/common.sh@120 -- # set +e 00:25:06.233 16:07:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:06.233 16:07:45 -- nvmf/common.sh@122 -- 
# modprobe -v -r nvme-tcp 00:25:06.233 rmmod nvme_tcp 00:25:06.233 rmmod nvme_fabrics 00:25:06.233 rmmod nvme_keyring 00:25:06.233 16:07:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:06.233 16:07:45 -- nvmf/common.sh@124 -- # set -e 00:25:06.233 16:07:45 -- nvmf/common.sh@125 -- # return 0 00:25:06.233 16:07:45 -- nvmf/common.sh@478 -- # '[' -n 2555633 ']' 00:25:06.233 16:07:45 -- nvmf/common.sh@479 -- # killprocess 2555633 00:25:06.233 16:07:45 -- common/autotest_common.sh@936 -- # '[' -z 2555633 ']' 00:25:06.233 16:07:45 -- common/autotest_common.sh@940 -- # kill -0 2555633 00:25:06.233 16:07:45 -- common/autotest_common.sh@941 -- # uname 00:25:06.233 16:07:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:06.233 16:07:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2555633 00:25:06.233 16:07:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:06.233 16:07:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:06.233 16:07:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2555633' 00:25:06.233 killing process with pid 2555633 00:25:06.233 16:07:45 -- common/autotest_common.sh@955 -- # kill 2555633 00:25:06.233 16:07:45 -- common/autotest_common.sh@960 -- # wait 2555633 00:25:07.611 16:07:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:07.611 16:07:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:07.611 16:07:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:07.611 16:07:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:07.611 16:07:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:07.611 16:07:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.611 16:07:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:07.611 16:07:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.519 16:07:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:09.519 00:25:09.519 real 0m21.039s 00:25:09.519 user 0m28.368s 00:25:09.519 sys 0m5.706s 00:25:09.519 16:07:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:09.519 16:07:49 -- common/autotest_common.sh@10 -- # set +x 00:25:09.519 ************************************ 00:25:09.519 END TEST nvmf_discovery 00:25:09.519 ************************************ 00:25:09.519 16:07:49 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:09.519 16:07:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:09.519 16:07:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:09.519 16:07:49 -- common/autotest_common.sh@10 -- # set +x 00:25:09.778 ************************************ 00:25:09.778 START TEST nvmf_discovery_remove_ifc 00:25:09.778 ************************************ 00:25:09.778 16:07:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:09.778 * Looking for test storage... 
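The discovery test that just finished drives the bdev_nvme_start_discovery RPC against the host application listening on /tmp/host.sock: a second call that reuses an already-registered discovery name or the same 10.0.0.2:8009 target fails with JSON-RPC error -17 ("File exists"), and a call against port 8010, where nothing listens, with a 3000 ms attach timeout fails with -110 ("Connection timed out"). A minimal sketch of the same calls follows; the socket path, addresses, flags and host NQN are taken from the log above, and it assumes the harness's rpc_cmd simply forwards its arguments to SPDK's scripts/rpc.py.

    # Sketch only: reproduce the duplicate-discovery and timeout cases exercised above.
    # Assumes an SPDK host app is already listening on /tmp/host.sock, as in this test.
    RPC="./scripts/rpc.py -s /tmp/host.sock"

    # First discovery attach succeeds and waits for the controller to attach (-w).
    $RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w

    # Re-using the same discovery target returns -17 "File exists".
    $RPC bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w || echo "expected failure: File exists"

    # A port nothing listens on, with a 3000 ms attach timeout, returns -110.
    $RPC bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000 || echo "expected failure: Connection timed out"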
00:25:09.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:09.778 16:07:49 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.778 16:07:49 -- nvmf/common.sh@7 -- # uname -s 00:25:09.778 16:07:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.778 16:07:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.778 16:07:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.778 16:07:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.778 16:07:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.778 16:07:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.778 16:07:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.778 16:07:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.778 16:07:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.778 16:07:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.778 16:07:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:09.778 16:07:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:09.778 16:07:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.778 16:07:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.778 16:07:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.778 16:07:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.778 16:07:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.778 16:07:49 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.778 16:07:49 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.778 16:07:49 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.778 16:07:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.778 16:07:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.779 16:07:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.779 16:07:49 -- paths/export.sh@5 -- # export PATH 00:25:09.779 16:07:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.779 16:07:49 -- nvmf/common.sh@47 -- # : 0 00:25:09.779 16:07:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:09.779 16:07:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:09.779 16:07:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.779 16:07:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.779 16:07:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.779 16:07:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:09.779 16:07:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:09.779 16:07:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:09.779 16:07:49 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:09.779 16:07:49 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:09.779 16:07:49 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:09.779 16:07:49 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:09.779 16:07:49 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:09.779 16:07:49 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:09.779 16:07:49 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:09.779 16:07:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:09.779 16:07:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.779 16:07:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:09.779 16:07:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:09.779 16:07:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:09.779 16:07:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.779 16:07:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:09.779 16:07:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.779 16:07:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:09.779 16:07:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:09.779 16:07:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:09.779 16:07:49 -- common/autotest_common.sh@10 -- # set +x 00:25:15.048 16:07:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:15.048 16:07:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:15.048 16:07:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:15.048 16:07:53 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:15.048 16:07:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:15.048 16:07:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:15.048 16:07:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:15.048 16:07:53 -- nvmf/common.sh@295 -- # net_devs=() 00:25:15.048 16:07:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:15.048 16:07:53 -- nvmf/common.sh@296 -- # e810=() 00:25:15.048 16:07:53 -- nvmf/common.sh@296 -- # local -ga e810 00:25:15.048 16:07:53 -- nvmf/common.sh@297 -- # x722=() 00:25:15.048 16:07:53 -- nvmf/common.sh@297 -- # local -ga x722 00:25:15.048 16:07:53 -- nvmf/common.sh@298 -- # mlx=() 00:25:15.048 16:07:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:15.048 16:07:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:15.048 16:07:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:15.048 16:07:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:15.048 16:07:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:15.048 16:07:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:15.048 16:07:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:15.048 16:07:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:15.048 16:07:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:15.048 16:07:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:15.048 16:07:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:15.048 16:07:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:15.048 16:07:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:15.048 16:07:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:15.048 16:07:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:15.048 16:07:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:15.048 16:07:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:15.048 16:07:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:15.048 16:07:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:15.048 16:07:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:15.048 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:15.048 16:07:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:15.048 16:07:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:15.048 16:07:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.048 16:07:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.048 16:07:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:15.048 16:07:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:15.048 16:07:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:15.048 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:15.048 16:07:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:15.048 16:07:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:15.048 16:07:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.048 16:07:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.048 16:07:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:15.048 16:07:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:15.048 16:07:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:15.048 16:07:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:15.048 16:07:53 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:15.048 16:07:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.048 16:07:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:15.048 16:07:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.048 16:07:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:15.048 Found net devices under 0000:86:00.0: cvl_0_0 00:25:15.048 16:07:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.048 16:07:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:15.048 16:07:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.048 16:07:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:15.048 16:07:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.048 16:07:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:15.048 Found net devices under 0000:86:00.1: cvl_0_1 00:25:15.048 16:07:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.048 16:07:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:15.048 16:07:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:15.048 16:07:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:15.048 16:07:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:15.048 16:07:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:15.048 16:07:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:15.048 16:07:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:15.048 16:07:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:15.048 16:07:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:15.048 16:07:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:15.048 16:07:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:15.048 16:07:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:15.048 16:07:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:15.048 16:07:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:15.048 16:07:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:15.048 16:07:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:15.048 16:07:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:15.048 16:07:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:15.048 16:07:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:15.048 16:07:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:15.048 16:07:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:15.048 16:07:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:15.048 16:07:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:15.048 16:07:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:15.048 16:07:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:15.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:15.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:25:15.048 00:25:15.048 --- 10.0.0.2 ping statistics --- 00:25:15.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.048 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:25:15.048 16:07:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:15.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:15.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:25:15.048 00:25:15.048 --- 10.0.0.1 ping statistics --- 00:25:15.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.048 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:25:15.048 16:07:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.048 16:07:54 -- nvmf/common.sh@411 -- # return 0 00:25:15.048 16:07:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:15.048 16:07:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.048 16:07:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:15.048 16:07:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:15.048 16:07:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.048 16:07:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:15.048 16:07:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:15.048 16:07:54 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:15.048 16:07:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:15.048 16:07:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:15.048 16:07:54 -- common/autotest_common.sh@10 -- # set +x 00:25:15.048 16:07:54 -- nvmf/common.sh@470 -- # nvmfpid=2561410 00:25:15.048 16:07:54 -- nvmf/common.sh@471 -- # waitforlisten 2561410 00:25:15.048 16:07:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:15.048 16:07:54 -- common/autotest_common.sh@817 -- # '[' -z 2561410 ']' 00:25:15.048 16:07:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.049 16:07:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:15.049 16:07:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.049 16:07:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:15.049 16:07:54 -- common/autotest_common.sh@10 -- # set +x 00:25:15.049 [2024-04-26 16:07:54.347420] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:25:15.049 [2024-04-26 16:07:54.347510] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.049 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.049 [2024-04-26 16:07:54.455021] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.049 [2024-04-26 16:07:54.666415] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.049 [2024-04-26 16:07:54.666463] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
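The nvmf_tcp_init sequence logged above splits the e810 port pair: cvl_0_0 is moved into a dedicated network namespace for the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and both directions are verified with a single ping before the target application is launched inside the namespace. A condensed sketch of that wiring, using only the device names, addresses, and commands printed in the log (run as root; interface names are specific to this test bed):

    # Sketch of the target/initiator split performed by nvmf_tcp_init, as logged above.
    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator
    # The nvmf target is then started inside the namespace (path shortened to the build tree):
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &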
00:25:15.049 [2024-04-26 16:07:54.666473] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.049 [2024-04-26 16:07:54.666483] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.049 [2024-04-26 16:07:54.666492] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:15.049 [2024-04-26 16:07:54.666526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.617 16:07:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:15.618 16:07:55 -- common/autotest_common.sh@850 -- # return 0 00:25:15.618 16:07:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:15.618 16:07:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:15.618 16:07:55 -- common/autotest_common.sh@10 -- # set +x 00:25:15.618 16:07:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.618 16:07:55 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:15.618 16:07:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:15.618 16:07:55 -- common/autotest_common.sh@10 -- # set +x 00:25:15.618 [2024-04-26 16:07:55.160156] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.618 [2024-04-26 16:07:55.168355] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:15.618 null0 00:25:15.618 [2024-04-26 16:07:55.200313] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.618 16:07:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:15.618 16:07:55 -- host/discovery_remove_ifc.sh@59 -- # hostpid=2561454 00:25:15.618 16:07:55 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2561454 /tmp/host.sock 00:25:15.618 16:07:55 -- common/autotest_common.sh@817 -- # '[' -z 2561454 ']' 00:25:15.618 16:07:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:25:15.618 16:07:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:15.618 16:07:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:15.618 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:15.618 16:07:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:15.618 16:07:55 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:15.618 16:07:55 -- common/autotest_common.sh@10 -- # set +x 00:25:15.618 [2024-04-26 16:07:55.295434] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:25:15.618 [2024-04-26 16:07:55.295517] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2561454 ] 00:25:15.877 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.877 [2024-04-26 16:07:55.400019] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.137 [2024-04-26 16:07:55.622076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.396 16:07:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:16.396 16:07:56 -- common/autotest_common.sh@850 -- # return 0 00:25:16.396 16:07:56 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:16.396 16:07:56 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:16.396 16:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.396 16:07:56 -- common/autotest_common.sh@10 -- # set +x 00:25:16.396 16:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.396 16:07:56 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:16.396 16:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.396 16:07:56 -- common/autotest_common.sh@10 -- # set +x 00:25:16.964 16:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.965 16:07:56 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:16.965 16:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.965 16:07:56 -- common/autotest_common.sh@10 -- # set +x 00:25:17.956 [2024-04-26 16:07:57.513316] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:17.956 [2024-04-26 16:07:57.513347] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:17.956 [2024-04-26 16:07:57.513373] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:17.956 [2024-04-26 16:07:57.601665] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:18.215 [2024-04-26 16:07:57.702724] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:18.215 [2024-04-26 16:07:57.702782] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:18.215 [2024-04-26 16:07:57.702850] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:18.215 [2024-04-26 16:07:57.702870] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:18.215 [2024-04-26 16:07:57.702901] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:18.215 16:07:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.215 16:07:57 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:18.215 16:07:57 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:18.215 16:07:57 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.215 16:07:57 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:18.215 16:07:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.215 16:07:57 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:18.215 16:07:57 -- common/autotest_common.sh@10 -- # set +x 00:25:18.215 16:07:57 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:18.215 [2024-04-26 16:07:57.711108] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x614000006840 was disconnected and freed. delete nvme_qpair. 00:25:18.215 16:07:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.215 16:07:57 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:18.215 16:07:57 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:18.215 16:07:57 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:18.215 16:07:57 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:18.215 16:07:57 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:18.215 16:07:57 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.215 16:07:57 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:18.215 16:07:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.215 16:07:57 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:18.215 16:07:57 -- common/autotest_common.sh@10 -- # set +x 00:25:18.215 16:07:57 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:18.215 16:07:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.474 16:07:57 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:18.474 16:07:57 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:19.409 16:07:58 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:19.409 16:07:58 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.409 16:07:58 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:19.409 16:07:58 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:19.409 16:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.409 16:07:58 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:19.409 16:07:58 -- common/autotest_common.sh@10 -- # set +x 00:25:19.409 16:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.409 16:07:58 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:19.409 16:07:58 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:20.345 16:07:59 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:20.345 16:07:59 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.345 16:07:59 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:20.345 16:07:59 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:20.345 16:07:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.345 16:07:59 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:20.345 16:07:59 -- common/autotest_common.sh@10 -- # set +x 00:25:20.345 16:07:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.345 16:08:00 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:20.345 16:08:00 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:21.722 16:08:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:21.722 16:08:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:21.723 16:08:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.723 16:08:01 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:21.723 16:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.723 16:08:01 -- common/autotest_common.sh@10 -- # set +x 00:25:21.723 16:08:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:21.723 16:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.723 16:08:01 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:21.723 16:08:01 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:22.659 16:08:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:22.659 16:08:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.659 16:08:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:22.659 16:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.659 16:08:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:22.659 16:08:02 -- common/autotest_common.sh@10 -- # set +x 00:25:22.659 16:08:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:22.659 16:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.659 16:08:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:22.659 16:08:02 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:23.596 16:08:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:23.596 16:08:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.596 16:08:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:23.596 16:08:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.596 16:08:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:23.596 16:08:03 -- common/autotest_common.sh@10 -- # set +x 00:25:23.596 16:08:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:23.596 [2024-04-26 16:08:03.143491] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:23.596 [2024-04-26 16:08:03.143547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.596 [2024-04-26 16:08:03.143562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.596 [2024-04-26 16:08:03.143577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.596 [2024-04-26 16:08:03.143587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.596 [2024-04-26 16:08:03.143597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.596 [2024-04-26 16:08:03.143607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.596 [2024-04-26 16:08:03.143618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.596 [2024-04-26 16:08:03.143628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.596 [2024-04-26 16:08:03.143642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:23.596 [2024-04-26 16:08:03.143653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.596 [2024-04-26 16:08:03.143663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005640 is same with the state(5) to be set 00:25:23.596 16:08:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.596 [2024-04-26 16:08:03.153508] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005640 (9): Bad file descriptor 00:25:23.596 [2024-04-26 16:08:03.163551] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:23.596 16:08:03 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:23.596 16:08:03 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:24.531 16:08:04 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:24.531 16:08:04 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.531 16:08:04 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:24.531 16:08:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.531 16:08:04 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:24.531 16:08:04 -- common/autotest_common.sh@10 -- # set +x 00:25:24.531 16:08:04 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:24.790 [2024-04-26 16:08:04.220097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:25.726 [2024-04-26 16:08:05.244140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:25.726 [2024-04-26 16:08:05.244197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005640 with addr=10.0.0.2, port=4420 00:25:25.726 [2024-04-26 16:08:05.244219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005640 is same with the state(5) to be set 00:25:25.726 [2024-04-26 16:08:05.244791] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005640 (9): Bad file descriptor 00:25:25.726 [2024-04-26 16:08:05.244828] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
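The errno-110 connect failures and the "Resetting controller failed" message above are the intended effect of this phase of the test: the target-side address was deleted and cvl_0_0 was brought down inside the namespace, after which the host polls its bdev list once per second until nvme0n1 disappears. A minimal sketch of that polling loop, assuming scripts/rpc.py as the RPC client and jq for filtering (both visible in the xtrace output above):

    # Sketch of the wait_for_bdev-style poll used by discovery_remove_ifc.sh.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

    expected=""     # wait until no bdev is left
    while :; do
        bdevs=$(./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
        [ "$bdevs" = "$expected" ] && break
        sleep 1
    done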
00:25:25.726 [2024-04-26 16:08:05.244877] bdev_nvme.c:6674:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:25.726 [2024-04-26 16:08:05.244916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.726 [2024-04-26 16:08:05.244935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.726 [2024-04-26 16:08:05.244954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.726 [2024-04-26 16:08:05.244968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.726 [2024-04-26 16:08:05.244983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.726 [2024-04-26 16:08:05.244997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.726 [2024-04-26 16:08:05.245011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.726 [2024-04-26 16:08:05.245024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.726 [2024-04-26 16:08:05.245038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:25.726 [2024-04-26 16:08:05.245053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:25.726 [2024-04-26 16:08:05.245066] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:25:25.726 [2024-04-26 16:08:05.245257] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005240 (9): Bad file descriptor 00:25:25.727 [2024-04-26 16:08:05.246303] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:25.727 [2024-04-26 16:08:05.246330] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:25.727 16:08:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.727 16:08:05 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:25.727 16:08:05 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:26.663 16:08:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:26.663 16:08:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:26.663 16:08:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:26.663 16:08:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:26.663 16:08:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.663 16:08:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:26.663 16:08:06 -- common/autotest_common.sh@10 -- # set +x 00:25:26.663 16:08:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.663 16:08:06 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:26.663 16:08:06 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:26.663 16:08:06 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:26.922 16:08:06 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:26.922 16:08:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:26.922 16:08:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:26.922 16:08:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:26.922 16:08:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:26.922 16:08:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.922 16:08:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:26.922 16:08:06 -- common/autotest_common.sh@10 -- # set +x 00:25:26.922 16:08:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.922 16:08:06 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:26.922 16:08:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:27.859 [2024-04-26 16:08:07.256887] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:27.859 [2024-04-26 16:08:07.256912] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:27.859 [2024-04-26 16:08:07.256940] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:27.859 [2024-04-26 16:08:07.387386] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:27.859 16:08:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:27.859 16:08:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.859 16:08:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:27.859 16:08:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.859 16:08:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:27.859 16:08:07 -- common/autotest_common.sh@10 -- # set +x 
00:25:27.859 16:08:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:27.859 16:08:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.859 16:08:07 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:27.859 16:08:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:28.118 [2024-04-26 16:08:07.570527] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:28.118 [2024-04-26 16:08:07.570574] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:28.118 [2024-04-26 16:08:07.570617] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:28.118 [2024-04-26 16:08:07.570637] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:28.118 [2024-04-26 16:08:07.570649] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:28.118 [2024-04-26 16:08:07.577129] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61400000a040 was disconnected and freed. delete nvme_qpair. 00:25:29.054 16:08:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:29.054 16:08:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:29.054 16:08:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:29.054 16:08:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.054 16:08:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:29.054 16:08:08 -- common/autotest_common.sh@10 -- # set +x 00:25:29.054 16:08:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:29.054 16:08:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.054 16:08:08 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:29.054 16:08:08 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:29.054 16:08:08 -- host/discovery_remove_ifc.sh@90 -- # killprocess 2561454 00:25:29.054 16:08:08 -- common/autotest_common.sh@936 -- # '[' -z 2561454 ']' 00:25:29.054 16:08:08 -- common/autotest_common.sh@940 -- # kill -0 2561454 00:25:29.054 16:08:08 -- common/autotest_common.sh@941 -- # uname 00:25:29.054 16:08:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:29.054 16:08:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2561454 00:25:29.054 16:08:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:29.054 16:08:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:29.054 16:08:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2561454' 00:25:29.054 killing process with pid 2561454 00:25:29.054 16:08:08 -- common/autotest_common.sh@955 -- # kill 2561454 00:25:29.054 16:08:08 -- common/autotest_common.sh@960 -- # wait 2561454 00:25:29.992 16:08:09 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:29.992 16:08:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:29.992 16:08:09 -- nvmf/common.sh@117 -- # sync 00:25:29.992 16:08:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:29.992 16:08:09 -- nvmf/common.sh@120 -- # set +e 00:25:29.992 16:08:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:29.992 16:08:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:29.992 rmmod nvme_tcp 00:25:29.992 rmmod nvme_fabrics 00:25:29.992 rmmod nvme_keyring 00:25:30.251 16:08:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:30.251 16:08:09 -- nvmf/common.sh@124 -- # set -e 00:25:30.251 
16:08:09 -- nvmf/common.sh@125 -- # return 0 00:25:30.251 16:08:09 -- nvmf/common.sh@478 -- # '[' -n 2561410 ']' 00:25:30.251 16:08:09 -- nvmf/common.sh@479 -- # killprocess 2561410 00:25:30.251 16:08:09 -- common/autotest_common.sh@936 -- # '[' -z 2561410 ']' 00:25:30.251 16:08:09 -- common/autotest_common.sh@940 -- # kill -0 2561410 00:25:30.251 16:08:09 -- common/autotest_common.sh@941 -- # uname 00:25:30.251 16:08:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:30.251 16:08:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2561410 00:25:30.251 16:08:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:30.251 16:08:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:30.251 16:08:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2561410' 00:25:30.251 killing process with pid 2561410 00:25:30.251 16:08:09 -- common/autotest_common.sh@955 -- # kill 2561410 00:25:30.251 16:08:09 -- common/autotest_common.sh@960 -- # wait 2561410 00:25:31.628 16:08:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:31.628 16:08:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:31.628 16:08:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:31.628 16:08:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:31.628 16:08:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:31.628 16:08:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.628 16:08:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:31.628 16:08:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.535 16:08:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:33.535 00:25:33.535 real 0m23.742s 00:25:33.535 user 0m30.495s 00:25:33.535 sys 0m4.997s 00:25:33.535 16:08:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:33.535 16:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:33.535 ************************************ 00:25:33.535 END TEST nvmf_discovery_remove_ifc 00:25:33.535 ************************************ 00:25:33.535 16:08:13 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:33.535 16:08:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:33.535 16:08:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:33.536 16:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:33.795 ************************************ 00:25:33.795 START TEST nvmf_identify_kernel_target 00:25:33.795 ************************************ 00:25:33.795 16:08:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:33.795 * Looking for test storage... 
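Each host test ends with the same teardown seen here: the host app and the nvmf target process are killed, the kernel NVMe-oF initiator modules loaded for the test are removed, and the per-test namespace and leftover addresses are cleaned up. A rough sketch of that sequence, reconstructed from the log; the netns deletion is an assumption, since the log only shows the remove_spdk_ns helper rather than the underlying command:

    # Approximate nvmftestfini cleanup, reconstructed from the log above.
    kill "$hostpid"                   # host app started with -r /tmp/host.sock
    kill "$nvmfpid"                   # nvmf_tgt running inside the namespace
    modprobe -v -r nvme-tcp           # also unloads nvme_fabrics / nvme_keyring dependencies
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk   # assumed body of the remove_spdk_ns helper
    ip -4 addr flush cvl_0_1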
00:25:33.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:33.796 16:08:13 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:33.796 16:08:13 -- nvmf/common.sh@7 -- # uname -s 00:25:33.796 16:08:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.796 16:08:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.796 16:08:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.796 16:08:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.796 16:08:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.796 16:08:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.796 16:08:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.796 16:08:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.796 16:08:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.796 16:08:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:33.796 16:08:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:33.796 16:08:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:33.796 16:08:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.796 16:08:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:33.796 16:08:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:33.796 16:08:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:33.796 16:08:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:33.796 16:08:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.796 16:08:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.796 16:08:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.796 16:08:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.796 16:08:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.796 16:08:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.796 16:08:13 -- paths/export.sh@5 -- # export PATH 00:25:33.796 16:08:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.796 16:08:13 -- nvmf/common.sh@47 -- # : 0 00:25:33.796 16:08:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:33.796 16:08:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:33.796 16:08:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:33.796 16:08:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.796 16:08:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.796 16:08:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:33.796 16:08:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:33.796 16:08:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:33.796 16:08:13 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:33.796 16:08:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:33.796 16:08:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.796 16:08:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:33.796 16:08:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:33.796 16:08:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:33.796 16:08:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.796 16:08:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:33.796 16:08:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.796 16:08:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:33.796 16:08:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:33.796 16:08:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:33.796 16:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:39.084 16:08:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:39.084 16:08:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:39.084 16:08:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:39.084 16:08:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:39.084 16:08:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:39.084 16:08:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:39.084 16:08:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:39.084 16:08:18 -- nvmf/common.sh@295 -- # net_devs=() 00:25:39.084 16:08:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:39.084 16:08:18 -- nvmf/common.sh@296 -- # e810=() 00:25:39.084 16:08:18 -- nvmf/common.sh@296 -- # local -ga e810 00:25:39.084 16:08:18 -- nvmf/common.sh@297 -- # 
x722=() 00:25:39.084 16:08:18 -- nvmf/common.sh@297 -- # local -ga x722 00:25:39.084 16:08:18 -- nvmf/common.sh@298 -- # mlx=() 00:25:39.084 16:08:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:39.084 16:08:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:39.084 16:08:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:39.084 16:08:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:39.084 16:08:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:39.084 16:08:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:39.084 16:08:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:39.084 16:08:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:39.084 16:08:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:39.084 16:08:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:39.084 16:08:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:39.084 16:08:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:39.084 16:08:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:39.084 16:08:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:39.084 16:08:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:39.084 16:08:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:39.084 16:08:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:39.084 16:08:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:39.084 16:08:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.084 16:08:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:39.084 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:39.084 16:08:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:39.084 16:08:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:39.084 16:08:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.084 16:08:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.084 16:08:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:39.084 16:08:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.084 16:08:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:39.084 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:39.084 16:08:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:39.084 16:08:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:39.084 16:08:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.084 16:08:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.084 16:08:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:39.084 16:08:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:39.084 16:08:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:39.084 16:08:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:39.084 16:08:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.084 16:08:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.084 16:08:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:39.084 16:08:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.084 16:08:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:39.084 Found net devices under 0000:86:00.0: cvl_0_0 00:25:39.084 16:08:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
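The device-discovery loop traced above maps each matching PCI function to the kernel network interface that sits on top of it, via sysfs. Reduced to its core idea (device ID 0x8086:0x159b and the 0000:86:00.x addresses are the ones reported in this run; lspci stands in for the script's pci_bus_cache helper, so this is a simplified sketch rather than the verbatim gather_supported_nvmf_pci_devs):

# List net interfaces backed by the E810 functions (vendor 0x8086, device 0x159b)
for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
    echo "Found $pci"
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [[ -e $dev ]] && echo "  net device: ${dev##*/}"   # e.g. cvl_0_0, cvl_0_1
    done
done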
00:25:39.084 16:08:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.084 16:08:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.084 16:08:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:39.084 16:08:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.085 16:08:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:39.085 Found net devices under 0000:86:00.1: cvl_0_1 00:25:39.085 16:08:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.085 16:08:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:39.085 16:08:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:39.085 16:08:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:39.085 16:08:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:39.085 16:08:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:39.085 16:08:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.085 16:08:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:39.085 16:08:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:39.085 16:08:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:39.085 16:08:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:39.085 16:08:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:39.085 16:08:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:39.085 16:08:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:39.085 16:08:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.085 16:08:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:39.085 16:08:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:39.085 16:08:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:39.085 16:08:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:39.085 16:08:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:39.085 16:08:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:39.085 16:08:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:39.085 16:08:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:39.085 16:08:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:39.085 16:08:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:39.085 16:08:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:39.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:39.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:25:39.085 00:25:39.085 --- 10.0.0.2 ping statistics --- 00:25:39.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.085 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:25:39.085 16:08:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:39.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:39.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:25:39.085 00:25:39.085 --- 10.0.0.1 ping statistics --- 00:25:39.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.085 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:25:39.085 16:08:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.085 16:08:18 -- nvmf/common.sh@411 -- # return 0 00:25:39.085 16:08:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:39.085 16:08:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.085 16:08:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:39.085 16:08:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:39.085 16:08:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.085 16:08:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:39.085 16:08:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:39.085 16:08:18 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:39.085 16:08:18 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:39.085 16:08:18 -- nvmf/common.sh@717 -- # local ip 00:25:39.085 16:08:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:39.085 16:08:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:39.085 16:08:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.085 16:08:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.085 16:08:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:39.085 16:08:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.085 16:08:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:39.085 16:08:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:39.085 16:08:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:39.085 16:08:18 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:39.085 16:08:18 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:39.085 16:08:18 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:39.085 16:08:18 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:25:39.085 16:08:18 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:39.085 16:08:18 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:39.085 16:08:18 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:39.085 16:08:18 -- nvmf/common.sh@628 -- # local block nvme 00:25:39.085 16:08:18 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:39.085 16:08:18 -- nvmf/common.sh@631 -- # modprobe nvmet 00:25:39.085 16:08:18 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:39.085 16:08:18 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:41.620 Waiting for block devices as requested 00:25:41.620 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:41.620 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:41.620 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:41.620 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:41.620 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:41.620 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:41.880 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:41.880 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:41.880 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:42.139 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:42.139 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:42.139 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:42.139 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:42.398 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:42.398 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:42.398 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:42.656 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:42.656 16:08:22 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:25:42.656 16:08:22 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:42.656 16:08:22 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:25:42.656 16:08:22 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:25:42.656 16:08:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:42.656 16:08:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:42.656 16:08:22 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:25:42.656 16:08:22 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:42.656 16:08:22 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:42.656 No valid GPT data, bailing 00:25:42.656 16:08:22 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:42.656 16:08:22 -- scripts/common.sh@391 -- # pt= 00:25:42.656 16:08:22 -- scripts/common.sh@392 -- # return 1 00:25:42.656 16:08:22 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:25:42.656 16:08:22 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:25:42.656 16:08:22 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:42.656 16:08:22 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:42.656 16:08:22 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:42.656 16:08:22 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:42.656 16:08:22 -- nvmf/common.sh@656 -- # echo 1 00:25:42.656 16:08:22 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:25:42.656 16:08:22 -- nvmf/common.sh@658 -- # echo 1 00:25:42.656 16:08:22 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:25:42.656 16:08:22 -- nvmf/common.sh@661 -- # echo tcp 00:25:42.656 16:08:22 -- nvmf/common.sh@662 -- # echo 4420 00:25:42.656 16:08:22 -- nvmf/common.sh@663 -- # echo ipv4 00:25:42.656 16:08:22 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:42.656 16:08:22 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:42.656 00:25:42.656 Discovery Log Number of Records 2, Generation counter 2 00:25:42.656 =====Discovery Log Entry 0====== 00:25:42.656 trtype: tcp 00:25:42.656 adrfam: ipv4 00:25:42.656 subtype: current discovery subsystem 00:25:42.656 treq: not specified, sq flow control disable supported 00:25:42.656 portid: 1 00:25:42.656 trsvcid: 4420 00:25:42.656 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:42.656 traddr: 10.0.0.1 00:25:42.656 eflags: none 00:25:42.656 sectype: none 00:25:42.656 =====Discovery Log Entry 1====== 00:25:42.656 trtype: tcp 00:25:42.656 adrfam: ipv4 00:25:42.656 subtype: nvme subsystem 00:25:42.656 treq: not specified, sq flow control disable supported 00:25:42.656 portid: 1 00:25:42.656 trsvcid: 4420 00:25:42.656 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:42.656 traddr: 10.0.0.1 00:25:42.656 eflags: none 00:25:42.656 sectype: none 00:25:42.656 16:08:22 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:42.656 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:42.916 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.916 ===================================================== 00:25:42.916 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:42.916 ===================================================== 00:25:42.916 Controller Capabilities/Features 00:25:42.916 ================================ 00:25:42.916 Vendor ID: 0000 00:25:42.916 Subsystem Vendor ID: 0000 00:25:42.916 Serial Number: 71c0cf1afb4b062592bd 00:25:42.916 Model Number: Linux 00:25:42.916 Firmware Version: 6.7.0-68 00:25:42.916 Recommended Arb Burst: 0 00:25:42.916 IEEE OUI Identifier: 00 00 00 00:25:42.916 Multi-path I/O 00:25:42.916 May have multiple subsystem ports: No 00:25:42.916 May have multiple controllers: No 00:25:42.916 Associated with SR-IOV VF: No 00:25:42.917 Max Data Transfer Size: Unlimited 00:25:42.917 Max Number of Namespaces: 0 00:25:42.917 Max Number of I/O Queues: 1024 00:25:42.917 NVMe Specification Version (VS): 1.3 00:25:42.917 NVMe Specification Version (Identify): 1.3 00:25:42.917 Maximum Queue Entries: 1024 00:25:42.917 Contiguous Queues Required: No 00:25:42.917 Arbitration Mechanisms Supported 00:25:42.917 Weighted Round Robin: Not Supported 00:25:42.917 Vendor Specific: Not Supported 00:25:42.917 Reset Timeout: 7500 ms 00:25:42.917 Doorbell Stride: 4 bytes 00:25:42.917 NVM Subsystem Reset: Not Supported 00:25:42.917 Command Sets Supported 00:25:42.917 NVM Command Set: Supported 00:25:42.917 Boot Partition: Not Supported 00:25:42.917 Memory Page Size Minimum: 4096 bytes 00:25:42.917 Memory Page Size Maximum: 4096 bytes 00:25:42.917 Persistent Memory Region: Not Supported 00:25:42.917 Optional Asynchronous Events Supported 00:25:42.917 Namespace Attribute Notices: Not Supported 00:25:42.917 Firmware Activation Notices: Not Supported 00:25:42.917 ANA Change Notices: Not Supported 00:25:42.917 PLE Aggregate Log Change Notices: Not Supported 00:25:42.917 LBA Status Info Alert Notices: Not Supported 00:25:42.917 EGE Aggregate Log Change Notices: Not Supported 00:25:42.917 Normal NVM Subsystem Shutdown event: Not Supported 00:25:42.917 Zone Descriptor Change Notices: Not Supported 00:25:42.917 Discovery Log Change Notices: Supported 
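The configure_kernel_target steps traced above, followed by the nvme discover call, amount to the configfs sequence below. The NQN, namespace device and address are the ones from this run; the trace only shows the values being echoed, so the mapping to the standard nvmet configfs attribute names (attr_allow_any_host, device_path, addr_traddr, ...) is an inference, and the snippet is a condensed sketch rather than the exact nvmf/common.sh code:

nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet nvmet-tcp                      # kernel NVMe-oF target, TCP transport
mkdir $sub
mkdir $sub/namespaces/1
mkdir $port
echo 1            > $sub/attr_allow_any_host
echo /dev/nvme0n1 > $sub/namespaces/1/device_path
echo 1            > $sub/namespaces/1/enable
echo 10.0.0.1     > $port/addr_traddr
echo tcp          > $port/addr_trtype
echo 4420         > $port/addr_trsvcid
echo ipv4         > $port/addr_adrfam
ln -s $sub $port/subsystems/                  # expose the subsystem on the port

# Verify from the initiator side (the trace additionally passes --hostnqn/--hostid)
nvme discover -t tcp -a 10.0.0.1 -s 4420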
00:25:42.917 Controller Attributes 00:25:42.917 128-bit Host Identifier: Not Supported 00:25:42.917 Non-Operational Permissive Mode: Not Supported 00:25:42.917 NVM Sets: Not Supported 00:25:42.917 Read Recovery Levels: Not Supported 00:25:42.917 Endurance Groups: Not Supported 00:25:42.917 Predictable Latency Mode: Not Supported 00:25:42.917 Traffic Based Keep ALive: Not Supported 00:25:42.917 Namespace Granularity: Not Supported 00:25:42.917 SQ Associations: Not Supported 00:25:42.917 UUID List: Not Supported 00:25:42.917 Multi-Domain Subsystem: Not Supported 00:25:42.917 Fixed Capacity Management: Not Supported 00:25:42.917 Variable Capacity Management: Not Supported 00:25:42.917 Delete Endurance Group: Not Supported 00:25:42.917 Delete NVM Set: Not Supported 00:25:42.917 Extended LBA Formats Supported: Not Supported 00:25:42.917 Flexible Data Placement Supported: Not Supported 00:25:42.917 00:25:42.917 Controller Memory Buffer Support 00:25:42.917 ================================ 00:25:42.917 Supported: No 00:25:42.917 00:25:42.917 Persistent Memory Region Support 00:25:42.917 ================================ 00:25:42.917 Supported: No 00:25:42.917 00:25:42.917 Admin Command Set Attributes 00:25:42.917 ============================ 00:25:42.917 Security Send/Receive: Not Supported 00:25:42.917 Format NVM: Not Supported 00:25:42.917 Firmware Activate/Download: Not Supported 00:25:42.917 Namespace Management: Not Supported 00:25:42.917 Device Self-Test: Not Supported 00:25:42.917 Directives: Not Supported 00:25:42.917 NVMe-MI: Not Supported 00:25:42.917 Virtualization Management: Not Supported 00:25:42.917 Doorbell Buffer Config: Not Supported 00:25:42.917 Get LBA Status Capability: Not Supported 00:25:42.917 Command & Feature Lockdown Capability: Not Supported 00:25:42.917 Abort Command Limit: 1 00:25:42.917 Async Event Request Limit: 1 00:25:42.917 Number of Firmware Slots: N/A 00:25:42.917 Firmware Slot 1 Read-Only: N/A 00:25:42.917 Firmware Activation Without Reset: N/A 00:25:42.917 Multiple Update Detection Support: N/A 00:25:42.917 Firmware Update Granularity: No Information Provided 00:25:42.917 Per-Namespace SMART Log: No 00:25:42.917 Asymmetric Namespace Access Log Page: Not Supported 00:25:42.917 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:42.917 Command Effects Log Page: Not Supported 00:25:42.917 Get Log Page Extended Data: Supported 00:25:42.917 Telemetry Log Pages: Not Supported 00:25:42.917 Persistent Event Log Pages: Not Supported 00:25:42.917 Supported Log Pages Log Page: May Support 00:25:42.917 Commands Supported & Effects Log Page: Not Supported 00:25:42.917 Feature Identifiers & Effects Log Page:May Support 00:25:42.917 NVMe-MI Commands & Effects Log Page: May Support 00:25:42.917 Data Area 4 for Telemetry Log: Not Supported 00:25:42.917 Error Log Page Entries Supported: 1 00:25:42.917 Keep Alive: Not Supported 00:25:42.917 00:25:42.917 NVM Command Set Attributes 00:25:42.917 ========================== 00:25:42.917 Submission Queue Entry Size 00:25:42.917 Max: 1 00:25:42.917 Min: 1 00:25:42.917 Completion Queue Entry Size 00:25:42.917 Max: 1 00:25:42.917 Min: 1 00:25:42.917 Number of Namespaces: 0 00:25:42.917 Compare Command: Not Supported 00:25:42.917 Write Uncorrectable Command: Not Supported 00:25:42.917 Dataset Management Command: Not Supported 00:25:42.917 Write Zeroes Command: Not Supported 00:25:42.917 Set Features Save Field: Not Supported 00:25:42.917 Reservations: Not Supported 00:25:42.917 Timestamp: Not Supported 00:25:42.917 Copy: Not 
Supported 00:25:42.917 Volatile Write Cache: Not Present 00:25:42.917 Atomic Write Unit (Normal): 1 00:25:42.917 Atomic Write Unit (PFail): 1 00:25:42.917 Atomic Compare & Write Unit: 1 00:25:42.917 Fused Compare & Write: Not Supported 00:25:42.917 Scatter-Gather List 00:25:42.917 SGL Command Set: Supported 00:25:42.917 SGL Keyed: Not Supported 00:25:42.917 SGL Bit Bucket Descriptor: Not Supported 00:25:42.917 SGL Metadata Pointer: Not Supported 00:25:42.917 Oversized SGL: Not Supported 00:25:42.917 SGL Metadata Address: Not Supported 00:25:42.917 SGL Offset: Supported 00:25:42.917 Transport SGL Data Block: Not Supported 00:25:42.917 Replay Protected Memory Block: Not Supported 00:25:42.917 00:25:42.917 Firmware Slot Information 00:25:42.917 ========================= 00:25:42.917 Active slot: 0 00:25:42.917 00:25:42.917 00:25:42.917 Error Log 00:25:42.917 ========= 00:25:42.917 00:25:42.917 Active Namespaces 00:25:42.917 ================= 00:25:42.917 Discovery Log Page 00:25:42.917 ================== 00:25:42.917 Generation Counter: 2 00:25:42.917 Number of Records: 2 00:25:42.917 Record Format: 0 00:25:42.917 00:25:42.917 Discovery Log Entry 0 00:25:42.917 ---------------------- 00:25:42.917 Transport Type: 3 (TCP) 00:25:42.917 Address Family: 1 (IPv4) 00:25:42.917 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:42.917 Entry Flags: 00:25:42.917 Duplicate Returned Information: 0 00:25:42.917 Explicit Persistent Connection Support for Discovery: 0 00:25:42.917 Transport Requirements: 00:25:42.917 Secure Channel: Not Specified 00:25:42.917 Port ID: 1 (0x0001) 00:25:42.917 Controller ID: 65535 (0xffff) 00:25:42.917 Admin Max SQ Size: 32 00:25:42.917 Transport Service Identifier: 4420 00:25:42.917 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:42.917 Transport Address: 10.0.0.1 00:25:42.917 Discovery Log Entry 1 00:25:42.917 ---------------------- 00:25:42.917 Transport Type: 3 (TCP) 00:25:42.917 Address Family: 1 (IPv4) 00:25:42.917 Subsystem Type: 2 (NVM Subsystem) 00:25:42.917 Entry Flags: 00:25:42.917 Duplicate Returned Information: 0 00:25:42.917 Explicit Persistent Connection Support for Discovery: 0 00:25:42.917 Transport Requirements: 00:25:42.917 Secure Channel: Not Specified 00:25:42.917 Port ID: 1 (0x0001) 00:25:42.917 Controller ID: 65535 (0xffff) 00:25:42.917 Admin Max SQ Size: 32 00:25:42.917 Transport Service Identifier: 4420 00:25:42.917 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:42.917 Transport Address: 10.0.0.1 00:25:42.917 16:08:22 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:42.917 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.917 get_feature(0x01) failed 00:25:42.917 get_feature(0x02) failed 00:25:42.917 get_feature(0x04) failed 00:25:42.917 ===================================================== 00:25:42.917 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:42.917 ===================================================== 00:25:42.917 Controller Capabilities/Features 00:25:42.917 ================================ 00:25:42.917 Vendor ID: 0000 00:25:42.917 Subsystem Vendor ID: 0000 00:25:42.917 Serial Number: b864920e77f768072c2a 00:25:42.918 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:42.918 Firmware Version: 6.7.0-68 00:25:42.918 Recommended Arb Burst: 6 00:25:42.918 IEEE OUI Identifier: 00 00 00 
00:25:42.918 Multi-path I/O 00:25:42.918 May have multiple subsystem ports: Yes 00:25:42.918 May have multiple controllers: Yes 00:25:42.918 Associated with SR-IOV VF: No 00:25:42.918 Max Data Transfer Size: Unlimited 00:25:42.918 Max Number of Namespaces: 1024 00:25:42.918 Max Number of I/O Queues: 128 00:25:42.918 NVMe Specification Version (VS): 1.3 00:25:42.918 NVMe Specification Version (Identify): 1.3 00:25:42.918 Maximum Queue Entries: 1024 00:25:42.918 Contiguous Queues Required: No 00:25:42.918 Arbitration Mechanisms Supported 00:25:42.918 Weighted Round Robin: Not Supported 00:25:42.918 Vendor Specific: Not Supported 00:25:42.918 Reset Timeout: 7500 ms 00:25:42.918 Doorbell Stride: 4 bytes 00:25:42.918 NVM Subsystem Reset: Not Supported 00:25:42.918 Command Sets Supported 00:25:42.918 NVM Command Set: Supported 00:25:42.918 Boot Partition: Not Supported 00:25:42.918 Memory Page Size Minimum: 4096 bytes 00:25:42.918 Memory Page Size Maximum: 4096 bytes 00:25:42.918 Persistent Memory Region: Not Supported 00:25:42.918 Optional Asynchronous Events Supported 00:25:42.918 Namespace Attribute Notices: Supported 00:25:42.918 Firmware Activation Notices: Not Supported 00:25:42.918 ANA Change Notices: Supported 00:25:42.918 PLE Aggregate Log Change Notices: Not Supported 00:25:42.918 LBA Status Info Alert Notices: Not Supported 00:25:42.918 EGE Aggregate Log Change Notices: Not Supported 00:25:42.918 Normal NVM Subsystem Shutdown event: Not Supported 00:25:42.918 Zone Descriptor Change Notices: Not Supported 00:25:42.918 Discovery Log Change Notices: Not Supported 00:25:42.918 Controller Attributes 00:25:42.918 128-bit Host Identifier: Supported 00:25:42.918 Non-Operational Permissive Mode: Not Supported 00:25:42.918 NVM Sets: Not Supported 00:25:42.918 Read Recovery Levels: Not Supported 00:25:42.918 Endurance Groups: Not Supported 00:25:42.918 Predictable Latency Mode: Not Supported 00:25:42.918 Traffic Based Keep ALive: Supported 00:25:42.918 Namespace Granularity: Not Supported 00:25:42.918 SQ Associations: Not Supported 00:25:42.918 UUID List: Not Supported 00:25:42.918 Multi-Domain Subsystem: Not Supported 00:25:42.918 Fixed Capacity Management: Not Supported 00:25:42.918 Variable Capacity Management: Not Supported 00:25:42.918 Delete Endurance Group: Not Supported 00:25:42.918 Delete NVM Set: Not Supported 00:25:42.918 Extended LBA Formats Supported: Not Supported 00:25:42.918 Flexible Data Placement Supported: Not Supported 00:25:42.918 00:25:42.918 Controller Memory Buffer Support 00:25:42.918 ================================ 00:25:42.918 Supported: No 00:25:42.918 00:25:42.918 Persistent Memory Region Support 00:25:42.918 ================================ 00:25:42.918 Supported: No 00:25:42.918 00:25:42.918 Admin Command Set Attributes 00:25:42.918 ============================ 00:25:42.918 Security Send/Receive: Not Supported 00:25:42.918 Format NVM: Not Supported 00:25:42.918 Firmware Activate/Download: Not Supported 00:25:42.918 Namespace Management: Not Supported 00:25:42.918 Device Self-Test: Not Supported 00:25:42.918 Directives: Not Supported 00:25:42.918 NVMe-MI: Not Supported 00:25:42.918 Virtualization Management: Not Supported 00:25:42.918 Doorbell Buffer Config: Not Supported 00:25:42.918 Get LBA Status Capability: Not Supported 00:25:42.918 Command & Feature Lockdown Capability: Not Supported 00:25:42.918 Abort Command Limit: 4 00:25:42.918 Async Event Request Limit: 4 00:25:42.918 Number of Firmware Slots: N/A 00:25:42.918 Firmware Slot 1 Read-Only: N/A 00:25:42.918 
Firmware Activation Without Reset: N/A 00:25:42.918 Multiple Update Detection Support: N/A 00:25:42.918 Firmware Update Granularity: No Information Provided 00:25:42.918 Per-Namespace SMART Log: Yes 00:25:42.918 Asymmetric Namespace Access Log Page: Supported 00:25:42.918 ANA Transition Time : 10 sec 00:25:42.918 00:25:42.918 Asymmetric Namespace Access Capabilities 00:25:42.918 ANA Optimized State : Supported 00:25:42.918 ANA Non-Optimized State : Supported 00:25:42.918 ANA Inaccessible State : Supported 00:25:42.918 ANA Persistent Loss State : Supported 00:25:42.918 ANA Change State : Supported 00:25:42.918 ANAGRPID is not changed : No 00:25:42.918 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:42.918 00:25:42.918 ANA Group Identifier Maximum : 128 00:25:42.918 Number of ANA Group Identifiers : 128 00:25:42.918 Max Number of Allowed Namespaces : 1024 00:25:42.918 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:42.918 Command Effects Log Page: Supported 00:25:42.918 Get Log Page Extended Data: Supported 00:25:42.918 Telemetry Log Pages: Not Supported 00:25:42.918 Persistent Event Log Pages: Not Supported 00:25:42.918 Supported Log Pages Log Page: May Support 00:25:42.918 Commands Supported & Effects Log Page: Not Supported 00:25:42.918 Feature Identifiers & Effects Log Page:May Support 00:25:42.918 NVMe-MI Commands & Effects Log Page: May Support 00:25:42.918 Data Area 4 for Telemetry Log: Not Supported 00:25:42.918 Error Log Page Entries Supported: 128 00:25:42.918 Keep Alive: Supported 00:25:42.918 Keep Alive Granularity: 1000 ms 00:25:42.918 00:25:42.918 NVM Command Set Attributes 00:25:42.918 ========================== 00:25:42.918 Submission Queue Entry Size 00:25:42.918 Max: 64 00:25:42.918 Min: 64 00:25:42.918 Completion Queue Entry Size 00:25:42.918 Max: 16 00:25:42.918 Min: 16 00:25:42.918 Number of Namespaces: 1024 00:25:42.918 Compare Command: Not Supported 00:25:42.918 Write Uncorrectable Command: Not Supported 00:25:42.918 Dataset Management Command: Supported 00:25:42.918 Write Zeroes Command: Supported 00:25:42.918 Set Features Save Field: Not Supported 00:25:42.918 Reservations: Not Supported 00:25:42.918 Timestamp: Not Supported 00:25:42.918 Copy: Not Supported 00:25:42.918 Volatile Write Cache: Present 00:25:42.918 Atomic Write Unit (Normal): 1 00:25:42.918 Atomic Write Unit (PFail): 1 00:25:42.918 Atomic Compare & Write Unit: 1 00:25:42.918 Fused Compare & Write: Not Supported 00:25:42.918 Scatter-Gather List 00:25:42.918 SGL Command Set: Supported 00:25:42.918 SGL Keyed: Not Supported 00:25:42.918 SGL Bit Bucket Descriptor: Not Supported 00:25:42.918 SGL Metadata Pointer: Not Supported 00:25:42.918 Oversized SGL: Not Supported 00:25:42.918 SGL Metadata Address: Not Supported 00:25:42.918 SGL Offset: Supported 00:25:42.918 Transport SGL Data Block: Not Supported 00:25:42.918 Replay Protected Memory Block: Not Supported 00:25:42.918 00:25:42.918 Firmware Slot Information 00:25:42.918 ========================= 00:25:42.918 Active slot: 0 00:25:42.918 00:25:42.918 Asymmetric Namespace Access 00:25:42.918 =========================== 00:25:42.918 Change Count : 0 00:25:42.918 Number of ANA Group Descriptors : 1 00:25:42.918 ANA Group Descriptor : 0 00:25:42.918 ANA Group ID : 1 00:25:42.918 Number of NSID Values : 1 00:25:42.918 Change Count : 0 00:25:42.918 ANA State : 1 00:25:42.918 Namespace Identifier : 1 00:25:42.918 00:25:42.918 Commands Supported and Effects 00:25:42.918 ============================== 00:25:42.918 Admin Commands 00:25:42.918 -------------- 
00:25:42.918 Get Log Page (02h): Supported 00:25:42.918 Identify (06h): Supported 00:25:42.918 Abort (08h): Supported 00:25:42.918 Set Features (09h): Supported 00:25:42.918 Get Features (0Ah): Supported 00:25:42.918 Asynchronous Event Request (0Ch): Supported 00:25:42.918 Keep Alive (18h): Supported 00:25:42.918 I/O Commands 00:25:42.918 ------------ 00:25:42.918 Flush (00h): Supported 00:25:42.918 Write (01h): Supported LBA-Change 00:25:42.918 Read (02h): Supported 00:25:42.918 Write Zeroes (08h): Supported LBA-Change 00:25:42.918 Dataset Management (09h): Supported 00:25:42.918 00:25:42.918 Error Log 00:25:42.918 ========= 00:25:42.918 Entry: 0 00:25:42.918 Error Count: 0x3 00:25:42.918 Submission Queue Id: 0x0 00:25:42.918 Command Id: 0x5 00:25:42.918 Phase Bit: 0 00:25:42.918 Status Code: 0x2 00:25:42.918 Status Code Type: 0x0 00:25:42.918 Do Not Retry: 1 00:25:42.918 Error Location: 0x28 00:25:42.918 LBA: 0x0 00:25:42.918 Namespace: 0x0 00:25:42.918 Vendor Log Page: 0x0 00:25:42.918 ----------- 00:25:42.918 Entry: 1 00:25:42.918 Error Count: 0x2 00:25:42.919 Submission Queue Id: 0x0 00:25:42.919 Command Id: 0x5 00:25:42.919 Phase Bit: 0 00:25:42.919 Status Code: 0x2 00:25:42.919 Status Code Type: 0x0 00:25:42.919 Do Not Retry: 1 00:25:42.919 Error Location: 0x28 00:25:42.919 LBA: 0x0 00:25:42.919 Namespace: 0x0 00:25:42.919 Vendor Log Page: 0x0 00:25:42.919 ----------- 00:25:42.919 Entry: 2 00:25:42.919 Error Count: 0x1 00:25:42.919 Submission Queue Id: 0x0 00:25:42.919 Command Id: 0x4 00:25:42.919 Phase Bit: 0 00:25:42.919 Status Code: 0x2 00:25:42.919 Status Code Type: 0x0 00:25:42.919 Do Not Retry: 1 00:25:42.919 Error Location: 0x28 00:25:42.919 LBA: 0x0 00:25:42.919 Namespace: 0x0 00:25:42.919 Vendor Log Page: 0x0 00:25:42.919 00:25:42.919 Number of Queues 00:25:42.919 ================ 00:25:42.919 Number of I/O Submission Queues: 128 00:25:42.919 Number of I/O Completion Queues: 128 00:25:42.919 00:25:42.919 ZNS Specific Controller Data 00:25:42.919 ============================ 00:25:42.919 Zone Append Size Limit: 0 00:25:42.919 00:25:42.919 00:25:42.919 Active Namespaces 00:25:42.919 ================= 00:25:42.919 get_feature(0x05) failed 00:25:42.919 Namespace ID:1 00:25:42.919 Command Set Identifier: NVM (00h) 00:25:42.919 Deallocate: Supported 00:25:42.919 Deallocated/Unwritten Error: Not Supported 00:25:42.919 Deallocated Read Value: Unknown 00:25:42.919 Deallocate in Write Zeroes: Not Supported 00:25:42.919 Deallocated Guard Field: 0xFFFF 00:25:42.919 Flush: Supported 00:25:42.919 Reservation: Not Supported 00:25:42.919 Namespace Sharing Capabilities: Multiple Controllers 00:25:42.919 Size (in LBAs): 1953525168 (931GiB) 00:25:42.919 Capacity (in LBAs): 1953525168 (931GiB) 00:25:42.919 Utilization (in LBAs): 1953525168 (931GiB) 00:25:42.919 UUID: e8bb0c4f-272c-4ea1-b3d1-1319c630dcbd 00:25:42.919 Thin Provisioning: Not Supported 00:25:42.919 Per-NS Atomic Units: Yes 00:25:42.919 Atomic Boundary Size (Normal): 0 00:25:42.919 Atomic Boundary Size (PFail): 0 00:25:42.919 Atomic Boundary Offset: 0 00:25:42.919 NGUID/EUI64 Never Reused: No 00:25:42.919 ANA group ID: 1 00:25:42.919 Namespace Write Protected: No 00:25:42.919 Number of LBA Formats: 1 00:25:42.919 Current LBA Format: LBA Format #00 00:25:42.919 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:42.919 00:25:42.919 16:08:22 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:42.919 16:08:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:42.919 16:08:22 -- nvmf/common.sh@117 -- # sync 00:25:42.919 16:08:22 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:42.919 16:08:22 -- nvmf/common.sh@120 -- # set +e 00:25:42.919 16:08:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:42.919 16:08:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:43.179 rmmod nvme_tcp 00:25:43.179 rmmod nvme_fabrics 00:25:43.179 16:08:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:43.179 16:08:22 -- nvmf/common.sh@124 -- # set -e 00:25:43.179 16:08:22 -- nvmf/common.sh@125 -- # return 0 00:25:43.179 16:08:22 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:25:43.179 16:08:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:43.179 16:08:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:43.179 16:08:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:43.179 16:08:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:43.179 16:08:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:43.179 16:08:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.179 16:08:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:43.179 16:08:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.083 16:08:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:45.083 16:08:24 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:45.083 16:08:24 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:45.083 16:08:24 -- nvmf/common.sh@675 -- # echo 0 00:25:45.083 16:08:24 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:45.083 16:08:24 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:45.083 16:08:24 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:45.083 16:08:24 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:45.083 16:08:24 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:25:45.083 16:08:24 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:25:45.083 16:08:24 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:47.615 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:47.615 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:47.615 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:47.615 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:47.875 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:47.875 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:47.875 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:47.875 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:47.875 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:47.875 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:47.875 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:47.875 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:47.875 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:47.875 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:47.875 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:47.875 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:48.827 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:48.827 00:25:48.827 real 0m15.136s 00:25:48.827 user 0m3.669s 00:25:48.827 sys 0m7.744s 00:25:48.827 16:08:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:48.827 16:08:28 -- common/autotest_common.sh@10 -- # set +x 00:25:48.827 ************************************ 00:25:48.827 END 
TEST nvmf_identify_kernel_target 00:25:48.827 ************************************ 00:25:48.827 16:08:28 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:48.827 16:08:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:48.827 16:08:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:48.827 16:08:28 -- common/autotest_common.sh@10 -- # set +x 00:25:49.086 ************************************ 00:25:49.086 START TEST nvmf_auth 00:25:49.086 ************************************ 00:25:49.086 16:08:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:49.086 * Looking for test storage... 00:25:49.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:49.086 16:08:28 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:49.086 16:08:28 -- nvmf/common.sh@7 -- # uname -s 00:25:49.086 16:08:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:49.086 16:08:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:49.086 16:08:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:49.086 16:08:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:49.086 16:08:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:49.086 16:08:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:49.086 16:08:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:49.086 16:08:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:49.086 16:08:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:49.086 16:08:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:49.086 16:08:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:49.086 16:08:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:49.086 16:08:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:49.086 16:08:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:49.086 16:08:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:49.086 16:08:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:49.086 16:08:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:49.086 16:08:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:49.086 16:08:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:49.086 16:08:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:49.086 16:08:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.086 16:08:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.086 16:08:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.086 16:08:28 -- paths/export.sh@5 -- # export PATH 00:25:49.086 16:08:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.086 16:08:28 -- nvmf/common.sh@47 -- # : 0 00:25:49.086 16:08:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:49.086 16:08:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:49.086 16:08:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:49.086 16:08:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:49.086 16:08:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:49.086 16:08:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:49.086 16:08:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:49.086 16:08:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:49.086 16:08:28 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:49.086 16:08:28 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:49.086 16:08:28 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:49.086 16:08:28 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:49.086 16:08:28 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:49.086 16:08:28 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:49.086 16:08:28 -- host/auth.sh@21 -- # keys=() 00:25:49.086 16:08:28 -- host/auth.sh@77 -- # nvmftestinit 00:25:49.086 16:08:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:49.086 16:08:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:49.086 16:08:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:49.086 16:08:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:49.086 16:08:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:49.086 16:08:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.086 16:08:28 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:49.086 16:08:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.086 16:08:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:49.086 16:08:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:49.086 16:08:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:49.086 16:08:28 -- common/autotest_common.sh@10 -- # set +x 00:25:54.520 16:08:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:54.520 16:08:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:54.520 16:08:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:54.520 16:08:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:54.520 16:08:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:54.520 16:08:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:54.520 16:08:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:54.520 16:08:33 -- nvmf/common.sh@295 -- # net_devs=() 00:25:54.520 16:08:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:54.520 16:08:33 -- nvmf/common.sh@296 -- # e810=() 00:25:54.520 16:08:33 -- nvmf/common.sh@296 -- # local -ga e810 00:25:54.520 16:08:33 -- nvmf/common.sh@297 -- # x722=() 00:25:54.520 16:08:33 -- nvmf/common.sh@297 -- # local -ga x722 00:25:54.520 16:08:33 -- nvmf/common.sh@298 -- # mlx=() 00:25:54.520 16:08:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:54.520 16:08:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:54.520 16:08:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:54.520 16:08:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:54.520 16:08:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:54.520 16:08:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:54.520 16:08:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:54.520 16:08:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:54.520 16:08:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:54.520 16:08:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:54.520 16:08:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:54.520 16:08:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:54.520 16:08:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:54.520 16:08:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:54.520 16:08:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:54.521 16:08:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:54.521 16:08:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:54.521 16:08:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:54.521 16:08:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:54.521 16:08:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:54.521 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:54.521 16:08:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:54.521 16:08:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:54.521 16:08:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.521 16:08:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.521 16:08:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:54.521 16:08:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:54.521 16:08:33 -- nvmf/common.sh@341 -- # echo 'Found 
0000:86:00.1 (0x8086 - 0x159b)' 00:25:54.521 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:54.521 16:08:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:54.521 16:08:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:54.521 16:08:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.521 16:08:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.521 16:08:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:54.521 16:08:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:54.521 16:08:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:54.521 16:08:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:54.521 16:08:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:54.521 16:08:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.521 16:08:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:54.521 16:08:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.521 16:08:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:54.521 Found net devices under 0000:86:00.0: cvl_0_0 00:25:54.521 16:08:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.521 16:08:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:54.521 16:08:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.521 16:08:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:54.521 16:08:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.521 16:08:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:54.521 Found net devices under 0000:86:00.1: cvl_0_1 00:25:54.521 16:08:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.521 16:08:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:54.521 16:08:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:54.521 16:08:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:54.521 16:08:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:54.521 16:08:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:54.521 16:08:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:54.521 16:08:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:54.521 16:08:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:54.521 16:08:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:54.521 16:08:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:54.521 16:08:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:54.521 16:08:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:54.521 16:08:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:54.521 16:08:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:54.521 16:08:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:54.521 16:08:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:54.521 16:08:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:54.521 16:08:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:54.521 16:08:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:54.521 16:08:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:54.521 16:08:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:54.521 16:08:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:54.521 16:08:34 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:54.521 16:08:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:54.521 16:08:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:54.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:54.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:25:54.521 00:25:54.521 --- 10.0.0.2 ping statistics --- 00:25:54.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.521 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:25:54.521 16:08:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:54.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:54.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:25:54.521 00:25:54.521 --- 10.0.0.1 ping statistics --- 00:25:54.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.521 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:25:54.521 16:08:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.521 16:08:34 -- nvmf/common.sh@411 -- # return 0 00:25:54.521 16:08:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:54.521 16:08:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.521 16:08:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:54.521 16:08:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:54.521 16:08:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.521 16:08:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:54.521 16:08:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:54.521 16:08:34 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:25:54.521 16:08:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:54.521 16:08:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:54.521 16:08:34 -- common/autotest_common.sh@10 -- # set +x 00:25:54.521 16:08:34 -- nvmf/common.sh@470 -- # nvmfpid=2573742 00:25:54.521 16:08:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:54.521 16:08:34 -- nvmf/common.sh@471 -- # waitforlisten 2573742 00:25:54.521 16:08:34 -- common/autotest_common.sh@817 -- # '[' -z 2573742 ']' 00:25:54.521 16:08:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.521 16:08:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:54.521 16:08:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
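nvmfappstart, as traced above, launches the SPDK target inside the target network namespace and then waits for its RPC socket to answer. A minimal equivalent, using the namespace, binary path and flags from this run (the polling loop is only a rough stand-in for waitforlisten):

NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
pid=$!

# Poll the default RPC socket until the app is up (roughly what waitforlisten does)
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
    sleep 0.1
done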
00:25:54.521 16:08:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:54.521 16:08:34 -- common/autotest_common.sh@10 -- # set +x 00:25:55.460 16:08:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:55.460 16:08:34 -- common/autotest_common.sh@850 -- # return 0 00:25:55.460 16:08:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:55.460 16:08:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:55.460 16:08:34 -- common/autotest_common.sh@10 -- # set +x 00:25:55.460 16:08:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.460 16:08:34 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:55.460 16:08:34 -- host/auth.sh@81 -- # gen_key null 32 00:25:55.460 16:08:34 -- host/auth.sh@53 -- # local digest len file key 00:25:55.460 16:08:34 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.460 16:08:34 -- host/auth.sh@54 -- # local -A digests 00:25:55.460 16:08:34 -- host/auth.sh@56 -- # digest=null 00:25:55.460 16:08:34 -- host/auth.sh@56 -- # len=32 00:25:55.460 16:08:34 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:55.460 16:08:34 -- host/auth.sh@57 -- # key=d76149ad57a9f052a1dfcc41ec52be82 00:25:55.460 16:08:34 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:25:55.460 16:08:34 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.ycQ 00:25:55.460 16:08:34 -- host/auth.sh@59 -- # format_dhchap_key d76149ad57a9f052a1dfcc41ec52be82 0 00:25:55.460 16:08:34 -- nvmf/common.sh@708 -- # format_key DHHC-1 d76149ad57a9f052a1dfcc41ec52be82 0 00:25:55.460 16:08:34 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:55.460 16:08:34 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:25:55.460 16:08:34 -- nvmf/common.sh@693 -- # key=d76149ad57a9f052a1dfcc41ec52be82 00:25:55.460 16:08:34 -- nvmf/common.sh@693 -- # digest=0 00:25:55.460 16:08:34 -- nvmf/common.sh@694 -- # python - 00:25:55.460 16:08:35 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.ycQ 00:25:55.460 16:08:35 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.ycQ 00:25:55.460 16:08:35 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.ycQ 00:25:55.460 16:08:35 -- host/auth.sh@82 -- # gen_key null 48 00:25:55.460 16:08:35 -- host/auth.sh@53 -- # local digest len file key 00:25:55.460 16:08:35 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.460 16:08:35 -- host/auth.sh@54 -- # local -A digests 00:25:55.460 16:08:35 -- host/auth.sh@56 -- # digest=null 00:25:55.460 16:08:35 -- host/auth.sh@56 -- # len=48 00:25:55.460 16:08:35 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:55.460 16:08:35 -- host/auth.sh@57 -- # key=fc543073bf2755659854d11e662a7f13f3759823f1dfdd57 00:25:55.460 16:08:35 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:25:55.460 16:08:35 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.sU0 00:25:55.460 16:08:35 -- host/auth.sh@59 -- # format_dhchap_key fc543073bf2755659854d11e662a7f13f3759823f1dfdd57 0 00:25:55.460 16:08:35 -- nvmf/common.sh@708 -- # format_key DHHC-1 fc543073bf2755659854d11e662a7f13f3759823f1dfdd57 0 00:25:55.460 16:08:35 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:55.460 16:08:35 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:25:55.461 16:08:35 -- nvmf/common.sh@693 -- # key=fc543073bf2755659854d11e662a7f13f3759823f1dfdd57 00:25:55.461 16:08:35 -- nvmf/common.sh@693 -- # 
digest=0 00:25:55.461 16:08:35 -- nvmf/common.sh@694 -- # python - 00:25:55.461 16:08:35 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.sU0 00:25:55.461 16:08:35 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.sU0 00:25:55.461 16:08:35 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.sU0 00:25:55.461 16:08:35 -- host/auth.sh@83 -- # gen_key sha256 32 00:25:55.461 16:08:35 -- host/auth.sh@53 -- # local digest len file key 00:25:55.461 16:08:35 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.461 16:08:35 -- host/auth.sh@54 -- # local -A digests 00:25:55.461 16:08:35 -- host/auth.sh@56 -- # digest=sha256 00:25:55.461 16:08:35 -- host/auth.sh@56 -- # len=32 00:25:55.461 16:08:35 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:55.461 16:08:35 -- host/auth.sh@57 -- # key=55de5cc8cb0d1b137a23d95917c35d95 00:25:55.461 16:08:35 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:25:55.461 16:08:35 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.j4s 00:25:55.461 16:08:35 -- host/auth.sh@59 -- # format_dhchap_key 55de5cc8cb0d1b137a23d95917c35d95 1 00:25:55.461 16:08:35 -- nvmf/common.sh@708 -- # format_key DHHC-1 55de5cc8cb0d1b137a23d95917c35d95 1 00:25:55.461 16:08:35 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:55.461 16:08:35 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:25:55.461 16:08:35 -- nvmf/common.sh@693 -- # key=55de5cc8cb0d1b137a23d95917c35d95 00:25:55.461 16:08:35 -- nvmf/common.sh@693 -- # digest=1 00:25:55.461 16:08:35 -- nvmf/common.sh@694 -- # python - 00:25:55.461 16:08:35 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.j4s 00:25:55.461 16:08:35 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.j4s 00:25:55.720 16:08:35 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.j4s 00:25:55.720 16:08:35 -- host/auth.sh@84 -- # gen_key sha384 48 00:25:55.720 16:08:35 -- host/auth.sh@53 -- # local digest len file key 00:25:55.720 16:08:35 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.720 16:08:35 -- host/auth.sh@54 -- # local -A digests 00:25:55.720 16:08:35 -- host/auth.sh@56 -- # digest=sha384 00:25:55.720 16:08:35 -- host/auth.sh@56 -- # len=48 00:25:55.720 16:08:35 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:55.720 16:08:35 -- host/auth.sh@57 -- # key=26bf7f7269a1629dd8391788675637e84dc10f23a4a9657a 00:25:55.720 16:08:35 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:25:55.720 16:08:35 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.Y73 00:25:55.720 16:08:35 -- host/auth.sh@59 -- # format_dhchap_key 26bf7f7269a1629dd8391788675637e84dc10f23a4a9657a 2 00:25:55.720 16:08:35 -- nvmf/common.sh@708 -- # format_key DHHC-1 26bf7f7269a1629dd8391788675637e84dc10f23a4a9657a 2 00:25:55.720 16:08:35 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:55.720 16:08:35 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:25:55.720 16:08:35 -- nvmf/common.sh@693 -- # key=26bf7f7269a1629dd8391788675637e84dc10f23a4a9657a 00:25:55.720 16:08:35 -- nvmf/common.sh@693 -- # digest=2 00:25:55.720 16:08:35 -- nvmf/common.sh@694 -- # python - 00:25:55.720 16:08:35 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.Y73 00:25:55.720 16:08:35 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.Y73 00:25:55.720 16:08:35 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.Y73 00:25:55.720 16:08:35 -- host/auth.sh@85 -- # gen_key sha512 64 00:25:55.720 16:08:35 -- host/auth.sh@53 -- # local digest len file key 00:25:55.720 16:08:35 -- 
host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:55.720 16:08:35 -- host/auth.sh@54 -- # local -A digests 00:25:55.720 16:08:35 -- host/auth.sh@56 -- # digest=sha512 00:25:55.720 16:08:35 -- host/auth.sh@56 -- # len=64 00:25:55.720 16:08:35 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:55.720 16:08:35 -- host/auth.sh@57 -- # key=ab6c2372cd6e6974d58bd71f54bf45f247035deb02252adcc2c76f51ae5d492a 00:25:55.720 16:08:35 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:25:55.720 16:08:35 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.psc 00:25:55.720 16:08:35 -- host/auth.sh@59 -- # format_dhchap_key ab6c2372cd6e6974d58bd71f54bf45f247035deb02252adcc2c76f51ae5d492a 3 00:25:55.720 16:08:35 -- nvmf/common.sh@708 -- # format_key DHHC-1 ab6c2372cd6e6974d58bd71f54bf45f247035deb02252adcc2c76f51ae5d492a 3 00:25:55.720 16:08:35 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:55.720 16:08:35 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:25:55.720 16:08:35 -- nvmf/common.sh@693 -- # key=ab6c2372cd6e6974d58bd71f54bf45f247035deb02252adcc2c76f51ae5d492a 00:25:55.720 16:08:35 -- nvmf/common.sh@693 -- # digest=3 00:25:55.720 16:08:35 -- nvmf/common.sh@694 -- # python - 00:25:55.720 16:08:35 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.psc 00:25:55.720 16:08:35 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.psc 00:25:55.720 16:08:35 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.psc 00:25:55.720 16:08:35 -- host/auth.sh@87 -- # waitforlisten 2573742 00:25:55.720 16:08:35 -- common/autotest_common.sh@817 -- # '[' -z 2573742 ']' 00:25:55.720 16:08:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.720 16:08:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:55.720 16:08:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
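
At this point the script has produced five DH-HMAC-CHAP secrets (keys[0] through keys[4]) of 32, 48, 32, 48 and 64 hex characters, stored them in /tmp/spdk.key-*.XXX files with mode 0600, and wrapped each one as a DHHC-1:<digest>:<base64>: string. A condensed sketch of what the traced gen_key/format_dhchap_key helpers appear to do for a single key; the four extra bytes folded into the base64 blob are assumed to be the CRC-32 trailer that the NVMe DH-HMAC-CHAP secret representation calls for:

# gen_key <digest> <len>: digest index 0=null, 1=sha256, 2=sha384, 3=sha512
digest=1 len=32
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # random hex string of $len characters
file=$(mktemp -t spdk.key-sha256.XXX)
python3 - "$key" "$digest" > "$file" <<'PYEOF'
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")    # assumed little-endian CRC-32 trailer
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
PYEOF
chmod 0600 "$file"

The resulting files are handed to the running target as keyring entries key0 through key4 in the RPC calls that follow.
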
00:25:55.720 16:08:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:55.720 16:08:35 -- common/autotest_common.sh@10 -- # set +x 00:25:55.979 16:08:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:55.980 16:08:35 -- common/autotest_common.sh@850 -- # return 0 00:25:55.980 16:08:35 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:25:55.980 16:08:35 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ycQ 00:25:55.980 16:08:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:55.980 16:08:35 -- common/autotest_common.sh@10 -- # set +x 00:25:55.980 16:08:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:55.980 16:08:35 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:25:55.980 16:08:35 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.sU0 00:25:55.980 16:08:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:55.980 16:08:35 -- common/autotest_common.sh@10 -- # set +x 00:25:55.980 16:08:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:55.980 16:08:35 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:25:55.980 16:08:35 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.j4s 00:25:55.980 16:08:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:55.980 16:08:35 -- common/autotest_common.sh@10 -- # set +x 00:25:55.980 16:08:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:55.980 16:08:35 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:25:55.980 16:08:35 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Y73 00:25:55.980 16:08:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:55.980 16:08:35 -- common/autotest_common.sh@10 -- # set +x 00:25:55.980 16:08:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:55.980 16:08:35 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:25:55.980 16:08:35 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.psc 00:25:55.980 16:08:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:55.980 16:08:35 -- common/autotest_common.sh@10 -- # set +x 00:25:55.980 16:08:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:55.980 16:08:35 -- host/auth.sh@92 -- # nvmet_auth_init 00:25:55.980 16:08:35 -- host/auth.sh@35 -- # get_main_ns_ip 00:25:55.980 16:08:35 -- nvmf/common.sh@717 -- # local ip 00:25:55.980 16:08:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:55.980 16:08:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:55.980 16:08:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.980 16:08:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.980 16:08:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:55.980 16:08:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.980 16:08:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:55.980 16:08:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:55.980 16:08:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:55.980 16:08:35 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:55.980 16:08:35 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:55.980 16:08:35 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:25:55.980 16:08:35 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:55.980 16:08:35 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:55.980 16:08:35 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:55.980 16:08:35 -- nvmf/common.sh@628 -- # local block nvme 00:25:55.980 16:08:35 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:25:55.980 16:08:35 -- nvmf/common.sh@631 -- # modprobe nvmet 00:25:55.980 16:08:35 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:55.980 16:08:35 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:58.512 Waiting for block devices as requested 00:25:58.512 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:58.512 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:58.512 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:58.771 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:58.771 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:58.771 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:58.771 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:59.029 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:59.029 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:59.029 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:59.029 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:59.288 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:59.288 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:59.288 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:59.547 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:59.547 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:59.547 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:00.114 16:08:39 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:00.114 16:08:39 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:00.114 16:08:39 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:26:00.114 16:08:39 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:00.114 16:08:39 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:00.114 16:08:39 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:00.114 16:08:39 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:26:00.114 16:08:39 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:00.114 16:08:39 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:00.114 No valid GPT data, bailing 00:26:00.114 16:08:39 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:00.114 16:08:39 -- scripts/common.sh@391 -- # pt= 00:26:00.114 16:08:39 -- scripts/common.sh@392 -- # return 1 00:26:00.114 16:08:39 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:26:00.114 16:08:39 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:26:00.114 16:08:39 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:00.114 16:08:39 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:00.114 16:08:39 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:00.114 16:08:39 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:00.114 16:08:39 -- nvmf/common.sh@656 -- # echo 1 00:26:00.114 16:08:39 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:26:00.114 16:08:39 -- nvmf/common.sh@658 -- # echo 1 00:26:00.114 16:08:39 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:26:00.114 16:08:39 -- nvmf/common.sh@661 -- # echo tcp 00:26:00.114 16:08:39 -- 
nvmf/common.sh@662 -- # echo 4420 00:26:00.114 16:08:39 -- nvmf/common.sh@663 -- # echo ipv4 00:26:00.114 16:08:39 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:00.114 16:08:39 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:00.373 00:26:00.373 Discovery Log Number of Records 2, Generation counter 2 00:26:00.373 =====Discovery Log Entry 0====== 00:26:00.373 trtype: tcp 00:26:00.373 adrfam: ipv4 00:26:00.373 subtype: current discovery subsystem 00:26:00.373 treq: not specified, sq flow control disable supported 00:26:00.373 portid: 1 00:26:00.373 trsvcid: 4420 00:26:00.373 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:00.373 traddr: 10.0.0.1 00:26:00.373 eflags: none 00:26:00.373 sectype: none 00:26:00.373 =====Discovery Log Entry 1====== 00:26:00.373 trtype: tcp 00:26:00.373 adrfam: ipv4 00:26:00.373 subtype: nvme subsystem 00:26:00.373 treq: not specified, sq flow control disable supported 00:26:00.373 portid: 1 00:26:00.373 trsvcid: 4420 00:26:00.373 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:00.373 traddr: 10.0.0.1 00:26:00.373 eflags: none 00:26:00.373 sectype: none 00:26:00.373 16:08:39 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:00.373 16:08:39 -- host/auth.sh@37 -- # echo 0 00:26:00.373 16:08:39 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:00.373 16:08:39 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:00.373 16:08:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:00.373 16:08:39 -- host/auth.sh@44 -- # digest=sha256 00:26:00.373 16:08:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.373 16:08:39 -- host/auth.sh@44 -- # keyid=1 00:26:00.373 16:08:39 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:00.373 16:08:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:00.373 16:08:39 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:00.373 16:08:39 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:00.373 16:08:39 -- host/auth.sh@100 -- # IFS=, 00:26:00.373 16:08:39 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:26:00.373 16:08:39 -- host/auth.sh@100 -- # IFS=, 00:26:00.373 16:08:39 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:00.373 16:08:39 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:00.373 16:08:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:00.373 16:08:39 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:26:00.373 16:08:39 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:00.373 16:08:39 -- host/auth.sh@68 -- # keyid=1 00:26:00.373 16:08:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:00.373 16:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.373 16:08:39 -- common/autotest_common.sh@10 -- # set +x 00:26:00.373 16:08:39 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.373 16:08:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:00.373 16:08:39 -- nvmf/common.sh@717 -- # local ip 00:26:00.373 16:08:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:00.373 16:08:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:00.373 16:08:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.373 16:08:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.373 16:08:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:00.373 16:08:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.373 16:08:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:00.373 16:08:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:00.373 16:08:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:00.373 16:08:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:00.373 16:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.373 16:08:39 -- common/autotest_common.sh@10 -- # set +x 00:26:00.373 nvme0n1 00:26:00.373 16:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.373 16:08:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.373 16:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.373 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:00.373 16:08:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:00.373 16:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.633 16:08:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.633 16:08:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.633 16:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.633 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:00.633 16:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.633 16:08:40 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:26:00.633 16:08:40 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:00.633 16:08:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:00.633 16:08:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:00.633 16:08:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:00.633 16:08:40 -- host/auth.sh@44 -- # digest=sha256 00:26:00.633 16:08:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.633 16:08:40 -- host/auth.sh@44 -- # keyid=0 00:26:00.633 16:08:40 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:00.633 16:08:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:00.633 16:08:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:00.633 16:08:40 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:00.633 16:08:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:26:00.633 16:08:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:00.633 16:08:40 -- host/auth.sh@68 -- # digest=sha256 00:26:00.633 16:08:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:00.633 16:08:40 -- host/auth.sh@68 -- # keyid=0 00:26:00.633 16:08:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.633 16:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.633 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:00.633 16:08:40 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.633 16:08:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:00.633 16:08:40 -- nvmf/common.sh@717 -- # local ip 00:26:00.633 16:08:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:00.633 16:08:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:00.633 16:08:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.633 16:08:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.633 16:08:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:00.633 16:08:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.633 16:08:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:00.633 16:08:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:00.633 16:08:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:00.633 16:08:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:00.633 16:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.633 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:00.633 nvme0n1 00:26:00.633 16:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.633 16:08:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.633 16:08:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:00.633 16:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.633 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:00.633 16:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.633 16:08:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.633 16:08:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.633 16:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.633 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:00.633 16:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.633 16:08:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:00.633 16:08:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:00.633 16:08:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:00.633 16:08:40 -- host/auth.sh@44 -- # digest=sha256 00:26:00.633 16:08:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.634 16:08:40 -- host/auth.sh@44 -- # keyid=1 00:26:00.634 16:08:40 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:00.634 16:08:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:00.634 16:08:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:00.634 16:08:40 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:00.634 16:08:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:26:00.634 16:08:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:00.634 16:08:40 -- host/auth.sh@68 -- # digest=sha256 00:26:00.634 16:08:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:00.634 16:08:40 -- host/auth.sh@68 -- # keyid=1 00:26:00.634 16:08:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.634 16:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.634 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:00.892 16:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.892 16:08:40 -- host/auth.sh@70 -- # get_main_ns_ip 
00:26:00.892 16:08:40 -- nvmf/common.sh@717 -- # local ip 00:26:00.892 16:08:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:00.892 16:08:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:00.892 16:08:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.892 16:08:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.892 16:08:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:00.892 16:08:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.892 16:08:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:00.892 16:08:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:00.892 16:08:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:00.892 16:08:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:00.892 16:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.892 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:00.892 nvme0n1 00:26:00.892 16:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.892 16:08:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.892 16:08:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:00.892 16:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.892 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:00.892 16:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.892 16:08:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.892 16:08:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.892 16:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.892 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:00.892 16:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.892 16:08:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:00.892 16:08:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:00.892 16:08:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:00.892 16:08:40 -- host/auth.sh@44 -- # digest=sha256 00:26:00.892 16:08:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.892 16:08:40 -- host/auth.sh@44 -- # keyid=2 00:26:00.892 16:08:40 -- host/auth.sh@45 -- # key=DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:00.892 16:08:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:00.892 16:08:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:00.892 16:08:40 -- host/auth.sh@49 -- # echo DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:00.892 16:08:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:26:00.892 16:08:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:00.892 16:08:40 -- host/auth.sh@68 -- # digest=sha256 00:26:00.892 16:08:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:00.892 16:08:40 -- host/auth.sh@68 -- # keyid=2 00:26:00.892 16:08:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.892 16:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.892 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:00.892 16:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.892 16:08:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:00.892 16:08:40 -- nvmf/common.sh@717 -- # local ip 00:26:00.893 16:08:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:00.893 16:08:40 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:26:00.893 16:08:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.893 16:08:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.893 16:08:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:00.893 16:08:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.893 16:08:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:00.893 16:08:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:00.893 16:08:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:00.893 16:08:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:00.893 16:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.893 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:01.151 nvme0n1 00:26:01.151 16:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.151 16:08:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.151 16:08:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:01.151 16:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.151 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:01.151 16:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.151 16:08:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.151 16:08:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.151 16:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.151 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:01.151 16:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.151 16:08:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:01.151 16:08:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:01.152 16:08:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:01.152 16:08:40 -- host/auth.sh@44 -- # digest=sha256 00:26:01.152 16:08:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.152 16:08:40 -- host/auth.sh@44 -- # keyid=3 00:26:01.152 16:08:40 -- host/auth.sh@45 -- # key=DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:01.152 16:08:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:01.152 16:08:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:01.152 16:08:40 -- host/auth.sh@49 -- # echo DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:01.152 16:08:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:26:01.152 16:08:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:01.152 16:08:40 -- host/auth.sh@68 -- # digest=sha256 00:26:01.152 16:08:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:01.152 16:08:40 -- host/auth.sh@68 -- # keyid=3 00:26:01.152 16:08:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:01.152 16:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.152 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:01.152 16:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.152 16:08:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:01.152 16:08:40 -- nvmf/common.sh@717 -- # local ip 00:26:01.152 16:08:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:01.152 16:08:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:01.152 16:08:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:26:01.152 16:08:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.152 16:08:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:01.152 16:08:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.152 16:08:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:01.152 16:08:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:01.152 16:08:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:01.152 16:08:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:01.152 16:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.152 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:01.410 nvme0n1 00:26:01.410 16:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.410 16:08:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:01.410 16:08:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.410 16:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.410 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:01.410 16:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.411 16:08:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.411 16:08:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.411 16:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.411 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:01.411 16:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.411 16:08:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:01.411 16:08:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:01.411 16:08:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:01.411 16:08:40 -- host/auth.sh@44 -- # digest=sha256 00:26:01.411 16:08:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.411 16:08:40 -- host/auth.sh@44 -- # keyid=4 00:26:01.411 16:08:40 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:01.411 16:08:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:01.411 16:08:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:01.411 16:08:40 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:01.411 16:08:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:26:01.411 16:08:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:01.411 16:08:40 -- host/auth.sh@68 -- # digest=sha256 00:26:01.411 16:08:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:01.411 16:08:40 -- host/auth.sh@68 -- # keyid=4 00:26:01.411 16:08:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:01.411 16:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.411 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:01.411 16:08:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.411 16:08:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:01.411 16:08:40 -- nvmf/common.sh@717 -- # local ip 00:26:01.411 16:08:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:01.411 16:08:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:01.411 16:08:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.411 16:08:40 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.411 16:08:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:01.411 16:08:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.411 16:08:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:01.411 16:08:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:01.411 16:08:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:01.411 16:08:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:01.411 16:08:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.411 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:26:01.670 nvme0n1 00:26:01.670 16:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.670 16:08:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:01.670 16:08:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.670 16:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.670 16:08:41 -- common/autotest_common.sh@10 -- # set +x 00:26:01.670 16:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.670 16:08:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.670 16:08:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.670 16:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.670 16:08:41 -- common/autotest_common.sh@10 -- # set +x 00:26:01.670 16:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.670 16:08:41 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:01.670 16:08:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:01.670 16:08:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:01.670 16:08:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:01.670 16:08:41 -- host/auth.sh@44 -- # digest=sha256 00:26:01.670 16:08:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:01.670 16:08:41 -- host/auth.sh@44 -- # keyid=0 00:26:01.670 16:08:41 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:01.670 16:08:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:01.670 16:08:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:01.670 16:08:41 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:01.670 16:08:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:26:01.670 16:08:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:01.670 16:08:41 -- host/auth.sh@68 -- # digest=sha256 00:26:01.670 16:08:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:01.670 16:08:41 -- host/auth.sh@68 -- # keyid=0 00:26:01.670 16:08:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:01.670 16:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.670 16:08:41 -- common/autotest_common.sh@10 -- # set +x 00:26:01.670 16:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.670 16:08:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:01.670 16:08:41 -- nvmf/common.sh@717 -- # local ip 00:26:01.670 16:08:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:01.670 16:08:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:01.670 16:08:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.670 16:08:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.670 16:08:41 -- nvmf/common.sh@723 -- # 
[[ -z tcp ]] 00:26:01.670 16:08:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.670 16:08:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:01.670 16:08:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:01.670 16:08:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:01.670 16:08:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:01.670 16:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.670 16:08:41 -- common/autotest_common.sh@10 -- # set +x 00:26:01.929 nvme0n1 00:26:01.929 16:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.929 16:08:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.929 16:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.929 16:08:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:01.929 16:08:41 -- common/autotest_common.sh@10 -- # set +x 00:26:01.929 16:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.929 16:08:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.929 16:08:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.929 16:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.929 16:08:41 -- common/autotest_common.sh@10 -- # set +x 00:26:01.929 16:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.929 16:08:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:01.929 16:08:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:01.929 16:08:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:01.929 16:08:41 -- host/auth.sh@44 -- # digest=sha256 00:26:01.929 16:08:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:01.929 16:08:41 -- host/auth.sh@44 -- # keyid=1 00:26:01.929 16:08:41 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:01.929 16:08:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:01.929 16:08:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:01.929 16:08:41 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:01.929 16:08:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:26:01.929 16:08:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:01.929 16:08:41 -- host/auth.sh@68 -- # digest=sha256 00:26:01.929 16:08:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:01.929 16:08:41 -- host/auth.sh@68 -- # keyid=1 00:26:01.929 16:08:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:01.929 16:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.929 16:08:41 -- common/autotest_common.sh@10 -- # set +x 00:26:01.929 16:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.929 16:08:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:01.929 16:08:41 -- nvmf/common.sh@717 -- # local ip 00:26:01.929 16:08:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:01.929 16:08:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:01.929 16:08:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.929 16:08:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.929 16:08:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:01.929 16:08:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.929 16:08:41 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:01.929 16:08:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:01.929 16:08:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:01.929 16:08:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:01.929 16:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.929 16:08:41 -- common/autotest_common.sh@10 -- # set +x 00:26:02.188 nvme0n1 00:26:02.188 16:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:02.188 16:08:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.188 16:08:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:02.188 16:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.188 16:08:41 -- common/autotest_common.sh@10 -- # set +x 00:26:02.188 16:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:02.188 16:08:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.188 16:08:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.188 16:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.188 16:08:41 -- common/autotest_common.sh@10 -- # set +x 00:26:02.188 16:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:02.188 16:08:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:02.188 16:08:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:02.188 16:08:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:02.188 16:08:41 -- host/auth.sh@44 -- # digest=sha256 00:26:02.188 16:08:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:02.188 16:08:41 -- host/auth.sh@44 -- # keyid=2 00:26:02.188 16:08:41 -- host/auth.sh@45 -- # key=DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:02.188 16:08:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:02.188 16:08:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:02.188 16:08:41 -- host/auth.sh@49 -- # echo DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:02.189 16:08:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:26:02.189 16:08:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:02.189 16:08:41 -- host/auth.sh@68 -- # digest=sha256 00:26:02.189 16:08:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:02.189 16:08:41 -- host/auth.sh@68 -- # keyid=2 00:26:02.189 16:08:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:02.189 16:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.189 16:08:41 -- common/autotest_common.sh@10 -- # set +x 00:26:02.189 16:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:02.189 16:08:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:02.189 16:08:41 -- nvmf/common.sh@717 -- # local ip 00:26:02.189 16:08:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:02.189 16:08:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:02.189 16:08:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.189 16:08:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.189 16:08:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:02.189 16:08:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.189 16:08:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:02.189 16:08:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:02.189 16:08:41 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:26:02.189 16:08:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:02.189 16:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.189 16:08:41 -- common/autotest_common.sh@10 -- # set +x 00:26:02.189 nvme0n1 00:26:02.448 16:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:02.448 16:08:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.448 16:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.448 16:08:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:02.448 16:08:41 -- common/autotest_common.sh@10 -- # set +x 00:26:02.448 16:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:02.448 16:08:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.448 16:08:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.448 16:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.448 16:08:41 -- common/autotest_common.sh@10 -- # set +x 00:26:02.448 16:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:02.448 16:08:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:02.448 16:08:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:02.448 16:08:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:02.448 16:08:41 -- host/auth.sh@44 -- # digest=sha256 00:26:02.448 16:08:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:02.448 16:08:41 -- host/auth.sh@44 -- # keyid=3 00:26:02.448 16:08:41 -- host/auth.sh@45 -- # key=DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:02.448 16:08:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:02.448 16:08:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:02.448 16:08:41 -- host/auth.sh@49 -- # echo DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:02.448 16:08:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:26:02.448 16:08:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:02.448 16:08:41 -- host/auth.sh@68 -- # digest=sha256 00:26:02.448 16:08:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:02.448 16:08:41 -- host/auth.sh@68 -- # keyid=3 00:26:02.448 16:08:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:02.448 16:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.448 16:08:41 -- common/autotest_common.sh@10 -- # set +x 00:26:02.448 16:08:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:02.448 16:08:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:02.448 16:08:41 -- nvmf/common.sh@717 -- # local ip 00:26:02.448 16:08:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:02.448 16:08:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:02.448 16:08:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.448 16:08:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.448 16:08:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:02.448 16:08:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.448 16:08:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:02.448 16:08:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:02.448 16:08:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:02.448 16:08:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:02.448 16:08:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.448 16:08:41 -- common/autotest_common.sh@10 -- # set +x 00:26:02.707 nvme0n1 00:26:02.707 16:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:02.707 16:08:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:02.707 16:08:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.707 16:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.707 16:08:42 -- common/autotest_common.sh@10 -- # set +x 00:26:02.707 16:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:02.707 16:08:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.707 16:08:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.707 16:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.707 16:08:42 -- common/autotest_common.sh@10 -- # set +x 00:26:02.707 16:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:02.707 16:08:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:02.707 16:08:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:02.707 16:08:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:02.707 16:08:42 -- host/auth.sh@44 -- # digest=sha256 00:26:02.707 16:08:42 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:02.707 16:08:42 -- host/auth.sh@44 -- # keyid=4 00:26:02.707 16:08:42 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:02.707 16:08:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:02.707 16:08:42 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:02.707 16:08:42 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:02.707 16:08:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:26:02.707 16:08:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:02.707 16:08:42 -- host/auth.sh@68 -- # digest=sha256 00:26:02.707 16:08:42 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:02.707 16:08:42 -- host/auth.sh@68 -- # keyid=4 00:26:02.707 16:08:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:02.707 16:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.707 16:08:42 -- common/autotest_common.sh@10 -- # set +x 00:26:02.707 16:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:02.707 16:08:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:02.707 16:08:42 -- nvmf/common.sh@717 -- # local ip 00:26:02.707 16:08:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:02.707 16:08:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:02.707 16:08:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.707 16:08:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.707 16:08:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:02.707 16:08:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.707 16:08:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:02.707 16:08:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:02.707 16:08:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:02.707 16:08:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:26:02.707 16:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.707 16:08:42 -- common/autotest_common.sh@10 -- # set +x 00:26:02.707 nvme0n1 00:26:02.707 16:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:02.966 16:08:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.966 16:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.966 16:08:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:02.966 16:08:42 -- common/autotest_common.sh@10 -- # set +x 00:26:02.966 16:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:02.966 16:08:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.966 16:08:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.966 16:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.966 16:08:42 -- common/autotest_common.sh@10 -- # set +x 00:26:02.966 16:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:02.966 16:08:42 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:02.967 16:08:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:02.967 16:08:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:02.967 16:08:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:02.967 16:08:42 -- host/auth.sh@44 -- # digest=sha256 00:26:02.967 16:08:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:02.967 16:08:42 -- host/auth.sh@44 -- # keyid=0 00:26:02.967 16:08:42 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:02.967 16:08:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:02.967 16:08:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:02.967 16:08:42 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:02.967 16:08:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:26:02.967 16:08:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:02.967 16:08:42 -- host/auth.sh@68 -- # digest=sha256 00:26:02.967 16:08:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:02.967 16:08:42 -- host/auth.sh@68 -- # keyid=0 00:26:02.967 16:08:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:02.967 16:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:02.967 16:08:42 -- common/autotest_common.sh@10 -- # set +x 00:26:02.967 16:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:02.967 16:08:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:02.967 16:08:42 -- nvmf/common.sh@717 -- # local ip 00:26:02.967 16:08:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:02.967 16:08:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:02.967 16:08:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.967 16:08:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.967 16:08:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:02.967 16:08:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.967 16:08:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:02.967 16:08:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:02.967 16:08:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:02.967 16:08:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:02.967 16:08:42 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:26:02.967 16:08:42 -- common/autotest_common.sh@10 -- # set +x 00:26:03.226 nvme0n1 00:26:03.226 16:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:03.226 16:08:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.226 16:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:03.226 16:08:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:03.226 16:08:42 -- common/autotest_common.sh@10 -- # set +x 00:26:03.226 16:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:03.226 16:08:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.226 16:08:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.226 16:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:03.226 16:08:42 -- common/autotest_common.sh@10 -- # set +x 00:26:03.226 16:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:03.226 16:08:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:03.226 16:08:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:03.226 16:08:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:03.226 16:08:42 -- host/auth.sh@44 -- # digest=sha256 00:26:03.226 16:08:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:03.226 16:08:42 -- host/auth.sh@44 -- # keyid=1 00:26:03.226 16:08:42 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:03.226 16:08:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:03.226 16:08:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:03.226 16:08:42 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:03.226 16:08:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:26:03.226 16:08:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:03.226 16:08:42 -- host/auth.sh@68 -- # digest=sha256 00:26:03.226 16:08:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:03.226 16:08:42 -- host/auth.sh@68 -- # keyid=1 00:26:03.226 16:08:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:03.226 16:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:03.226 16:08:42 -- common/autotest_common.sh@10 -- # set +x 00:26:03.226 16:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:03.226 16:08:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:03.226 16:08:42 -- nvmf/common.sh@717 -- # local ip 00:26:03.226 16:08:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:03.226 16:08:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:03.226 16:08:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.226 16:08:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.226 16:08:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:03.226 16:08:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.226 16:08:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:03.226 16:08:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:03.226 16:08:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:03.226 16:08:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:03.226 16:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:03.226 16:08:42 -- common/autotest_common.sh@10 -- # set +x 00:26:03.509 nvme0n1 00:26:03.509 
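
Every iteration of this sweep starts with three echoes from nvmet_auth_set_key: the digest wrapped as 'hmac(...)', the DH group name, and one of the DHHC-1 secrets generated earlier. The xtrace output does not show where those echoes are redirected, but they presumably update the kernel host entry created under /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 (the entry linked into the subsystem's allowed_hosts). A sketch of one such target-side update for the sha256/ffdhe4096/key1 case just traced, with the dhchap_* attribute names assumed rather than taken from the trace:

# point the kernel target's host entry at digest sha256, group ffdhe4096 and key 1
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"        # assumed attribute name
echo ffdhe4096 > "$host/dhchap_dhgroup"          # assumed attribute name
cat /tmp/spdk.key-null.sU0 > "$host/dhchap_key"  # the secret registered as keyring entry key1
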
16:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:03.509 16:08:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.509 16:08:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:03.509 16:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:03.509 16:08:43 -- common/autotest_common.sh@10 -- # set +x 00:26:03.509 16:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:03.509 16:08:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.509 16:08:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.509 16:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:03.509 16:08:43 -- common/autotest_common.sh@10 -- # set +x 00:26:03.509 16:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:03.509 16:08:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:03.509 16:08:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:03.509 16:08:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:03.509 16:08:43 -- host/auth.sh@44 -- # digest=sha256 00:26:03.509 16:08:43 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:03.509 16:08:43 -- host/auth.sh@44 -- # keyid=2 00:26:03.509 16:08:43 -- host/auth.sh@45 -- # key=DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:03.509 16:08:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:03.509 16:08:43 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:03.509 16:08:43 -- host/auth.sh@49 -- # echo DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:03.509 16:08:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:26:03.509 16:08:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:03.509 16:08:43 -- host/auth.sh@68 -- # digest=sha256 00:26:03.509 16:08:43 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:03.509 16:08:43 -- host/auth.sh@68 -- # keyid=2 00:26:03.509 16:08:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:03.509 16:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:03.509 16:08:43 -- common/autotest_common.sh@10 -- # set +x 00:26:03.509 16:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:03.509 16:08:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:03.509 16:08:43 -- nvmf/common.sh@717 -- # local ip 00:26:03.509 16:08:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:03.509 16:08:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:03.509 16:08:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.509 16:08:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.509 16:08:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:03.509 16:08:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.509 16:08:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:03.509 16:08:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:03.509 16:08:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:03.509 16:08:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:03.509 16:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:03.509 16:08:43 -- common/autotest_common.sh@10 -- # set +x 00:26:03.779 nvme0n1 00:26:03.780 16:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:03.780 16:08:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.780 16:08:43 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:26:03.780 16:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:03.780 16:08:43 -- common/autotest_common.sh@10 -- # set +x 00:26:03.780 16:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:03.780 16:08:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.780 16:08:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.780 16:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:03.780 16:08:43 -- common/autotest_common.sh@10 -- # set +x 00:26:03.780 16:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:03.780 16:08:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:03.780 16:08:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:03.780 16:08:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:03.780 16:08:43 -- host/auth.sh@44 -- # digest=sha256 00:26:03.780 16:08:43 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:03.780 16:08:43 -- host/auth.sh@44 -- # keyid=3 00:26:03.780 16:08:43 -- host/auth.sh@45 -- # key=DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:03.780 16:08:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:03.780 16:08:43 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:03.780 16:08:43 -- host/auth.sh@49 -- # echo DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:03.780 16:08:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:26:03.780 16:08:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:03.780 16:08:43 -- host/auth.sh@68 -- # digest=sha256 00:26:03.780 16:08:43 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:03.780 16:08:43 -- host/auth.sh@68 -- # keyid=3 00:26:03.780 16:08:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:03.780 16:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:03.780 16:08:43 -- common/autotest_common.sh@10 -- # set +x 00:26:03.780 16:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:03.780 16:08:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:03.780 16:08:43 -- nvmf/common.sh@717 -- # local ip 00:26:03.780 16:08:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:03.780 16:08:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:03.780 16:08:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.780 16:08:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.780 16:08:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:03.780 16:08:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.780 16:08:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:03.780 16:08:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:03.780 16:08:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:03.780 16:08:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:03.780 16:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:03.780 16:08:43 -- common/autotest_common.sh@10 -- # set +x 00:26:04.045 nvme0n1 00:26:04.045 16:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.045 16:08:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.045 16:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.045 16:08:43 -- host/auth.sh@73 -- # jq -r '.[].name' 
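(Illustrative aside, not part of the captured console output: the trace above, and the iterations that follow, repeat one fixed sequence per digest/dhgroup/keyid combination - restrict the initiator to a single DH-HMAC-CHAP digest and DH group via bdev_nvme_set_options, then attach to the 10.0.0.1:4420 listener with the key that matches what nvmet_auth_set_key installed on the target. A minimal sketch of that per-iteration step, assuming SPDK's scripts/rpc.py client as a stand-in for the harness' rpc_cmd wrapper and the subsystem/host NQNs used in this run:

  #!/usr/bin/env bash
  # assumed stand-in for the test harness' rpc_cmd wrapper
  rpc() { ./scripts/rpc.py "$@"; }

  # one combination from the trace; the script cycles these values
  digest=sha256 dhgroup=ffdhe4096 keyid=0

  # allow exactly this digest/dhgroup pair on the initiator side
  rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # connect with the key ID matching the one configured on the target
  rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"

All RPC names and flags above appear verbatim in the trace; only the rpc() wrapper and the literal digest/dhgroup/keyid assignment are illustrative assumptions.)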
00:26:04.045 16:08:43 -- common/autotest_common.sh@10 -- # set +x 00:26:04.045 16:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.046 16:08:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.046 16:08:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.046 16:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.304 16:08:43 -- common/autotest_common.sh@10 -- # set +x 00:26:04.304 16:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.304 16:08:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:04.304 16:08:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:04.304 16:08:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:04.304 16:08:43 -- host/auth.sh@44 -- # digest=sha256 00:26:04.304 16:08:43 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:04.304 16:08:43 -- host/auth.sh@44 -- # keyid=4 00:26:04.304 16:08:43 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:04.304 16:08:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:04.304 16:08:43 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:04.304 16:08:43 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:04.304 16:08:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:26:04.304 16:08:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:04.304 16:08:43 -- host/auth.sh@68 -- # digest=sha256 00:26:04.304 16:08:43 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:04.304 16:08:43 -- host/auth.sh@68 -- # keyid=4 00:26:04.304 16:08:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:04.304 16:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.304 16:08:43 -- common/autotest_common.sh@10 -- # set +x 00:26:04.304 16:08:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.304 16:08:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:04.304 16:08:43 -- nvmf/common.sh@717 -- # local ip 00:26:04.304 16:08:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:04.304 16:08:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:04.304 16:08:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.304 16:08:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.304 16:08:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:04.304 16:08:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.305 16:08:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:04.305 16:08:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:04.305 16:08:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:04.305 16:08:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:04.305 16:08:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.305 16:08:43 -- common/autotest_common.sh@10 -- # set +x 00:26:04.563 nvme0n1 00:26:04.563 16:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.563 16:08:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.563 16:08:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:04.563 16:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.563 16:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:04.563 
16:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.563 16:08:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.563 16:08:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.563 16:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.563 16:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:04.563 16:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.563 16:08:44 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:04.563 16:08:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:04.563 16:08:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:04.563 16:08:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:04.563 16:08:44 -- host/auth.sh@44 -- # digest=sha256 00:26:04.563 16:08:44 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:04.563 16:08:44 -- host/auth.sh@44 -- # keyid=0 00:26:04.563 16:08:44 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:04.563 16:08:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:04.563 16:08:44 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:04.563 16:08:44 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:04.563 16:08:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:26:04.563 16:08:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:04.563 16:08:44 -- host/auth.sh@68 -- # digest=sha256 00:26:04.563 16:08:44 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:04.563 16:08:44 -- host/auth.sh@68 -- # keyid=0 00:26:04.563 16:08:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:04.563 16:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.563 16:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:04.563 16:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.563 16:08:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:04.563 16:08:44 -- nvmf/common.sh@717 -- # local ip 00:26:04.563 16:08:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:04.563 16:08:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:04.563 16:08:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.563 16:08:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.563 16:08:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:04.563 16:08:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.563 16:08:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:04.563 16:08:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:04.563 16:08:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:04.563 16:08:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:04.563 16:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.563 16:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:04.822 nvme0n1 00:26:04.822 16:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.822 16:08:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:04.822 16:08:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.822 16:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.822 16:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:04.822 16:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.823 16:08:44 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.823 16:08:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.823 16:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.823 16:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:05.082 16:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:05.082 16:08:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:05.082 16:08:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:05.082 16:08:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:05.082 16:08:44 -- host/auth.sh@44 -- # digest=sha256 00:26:05.082 16:08:44 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.082 16:08:44 -- host/auth.sh@44 -- # keyid=1 00:26:05.082 16:08:44 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:05.082 16:08:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:05.082 16:08:44 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:05.082 16:08:44 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:05.082 16:08:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:26:05.082 16:08:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:05.082 16:08:44 -- host/auth.sh@68 -- # digest=sha256 00:26:05.082 16:08:44 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:05.082 16:08:44 -- host/auth.sh@68 -- # keyid=1 00:26:05.082 16:08:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:05.082 16:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:05.082 16:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:05.082 16:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:05.082 16:08:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:05.082 16:08:44 -- nvmf/common.sh@717 -- # local ip 00:26:05.082 16:08:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:05.082 16:08:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:05.082 16:08:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.082 16:08:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.082 16:08:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:05.082 16:08:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.082 16:08:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:05.082 16:08:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:05.082 16:08:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:05.082 16:08:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:05.082 16:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:05.082 16:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:05.341 nvme0n1 00:26:05.341 16:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:05.341 16:08:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.341 16:08:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:05.341 16:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:05.341 16:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:05.341 16:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:05.341 16:08:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.341 16:08:44 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:05.341 16:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:05.341 16:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:05.341 16:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:05.341 16:08:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:05.341 16:08:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:05.341 16:08:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:05.341 16:08:44 -- host/auth.sh@44 -- # digest=sha256 00:26:05.341 16:08:44 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.341 16:08:44 -- host/auth.sh@44 -- # keyid=2 00:26:05.341 16:08:44 -- host/auth.sh@45 -- # key=DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:05.341 16:08:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:05.341 16:08:44 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:05.341 16:08:44 -- host/auth.sh@49 -- # echo DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:05.341 16:08:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:26:05.341 16:08:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:05.341 16:08:44 -- host/auth.sh@68 -- # digest=sha256 00:26:05.341 16:08:44 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:05.341 16:08:44 -- host/auth.sh@68 -- # keyid=2 00:26:05.341 16:08:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:05.341 16:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:05.341 16:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:05.341 16:08:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:05.341 16:08:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:05.341 16:08:44 -- nvmf/common.sh@717 -- # local ip 00:26:05.341 16:08:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:05.341 16:08:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:05.341 16:08:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.341 16:08:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.341 16:08:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:05.341 16:08:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.341 16:08:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:05.341 16:08:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:05.341 16:08:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:05.341 16:08:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:05.341 16:08:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:05.342 16:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:05.910 nvme0n1 00:26:05.910 16:08:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:05.910 16:08:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.910 16:08:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:05.910 16:08:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:05.910 16:08:45 -- common/autotest_common.sh@10 -- # set +x 00:26:05.910 16:08:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:05.910 16:08:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.910 16:08:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.910 16:08:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:05.910 16:08:45 -- common/autotest_common.sh@10 -- # 
set +x 00:26:05.910 16:08:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:05.910 16:08:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:05.910 16:08:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:05.910 16:08:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:05.910 16:08:45 -- host/auth.sh@44 -- # digest=sha256 00:26:05.910 16:08:45 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.910 16:08:45 -- host/auth.sh@44 -- # keyid=3 00:26:05.910 16:08:45 -- host/auth.sh@45 -- # key=DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:05.910 16:08:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:05.910 16:08:45 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:05.910 16:08:45 -- host/auth.sh@49 -- # echo DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:05.910 16:08:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:26:05.910 16:08:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:05.910 16:08:45 -- host/auth.sh@68 -- # digest=sha256 00:26:05.910 16:08:45 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:05.910 16:08:45 -- host/auth.sh@68 -- # keyid=3 00:26:05.910 16:08:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:05.910 16:08:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:05.910 16:08:45 -- common/autotest_common.sh@10 -- # set +x 00:26:05.910 16:08:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:05.910 16:08:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:05.910 16:08:45 -- nvmf/common.sh@717 -- # local ip 00:26:05.910 16:08:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:05.910 16:08:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:05.910 16:08:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.910 16:08:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.910 16:08:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:05.910 16:08:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.910 16:08:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:05.910 16:08:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:05.910 16:08:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:05.910 16:08:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:05.910 16:08:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:05.910 16:08:45 -- common/autotest_common.sh@10 -- # set +x 00:26:06.169 nvme0n1 00:26:06.169 16:08:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:06.170 16:08:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:06.170 16:08:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.170 16:08:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:06.170 16:08:45 -- common/autotest_common.sh@10 -- # set +x 00:26:06.428 16:08:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:06.428 16:08:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.428 16:08:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.428 16:08:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:06.428 16:08:45 -- common/autotest_common.sh@10 -- # set +x 00:26:06.428 16:08:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:06.428 16:08:45 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:06.428 16:08:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:06.428 16:08:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:06.428 16:08:45 -- host/auth.sh@44 -- # digest=sha256 00:26:06.428 16:08:45 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:06.428 16:08:45 -- host/auth.sh@44 -- # keyid=4 00:26:06.428 16:08:45 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:06.428 16:08:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:06.429 16:08:45 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:06.429 16:08:45 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:06.429 16:08:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:26:06.429 16:08:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:06.429 16:08:45 -- host/auth.sh@68 -- # digest=sha256 00:26:06.429 16:08:45 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:06.429 16:08:45 -- host/auth.sh@68 -- # keyid=4 00:26:06.429 16:08:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:06.429 16:08:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:06.429 16:08:45 -- common/autotest_common.sh@10 -- # set +x 00:26:06.429 16:08:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:06.429 16:08:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:06.429 16:08:45 -- nvmf/common.sh@717 -- # local ip 00:26:06.429 16:08:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:06.429 16:08:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:06.429 16:08:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.429 16:08:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.429 16:08:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:06.429 16:08:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.429 16:08:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:06.429 16:08:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:06.429 16:08:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:06.429 16:08:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:06.429 16:08:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:06.429 16:08:45 -- common/autotest_common.sh@10 -- # set +x 00:26:06.688 nvme0n1 00:26:06.688 16:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:06.688 16:08:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:06.688 16:08:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.688 16:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:06.688 16:08:46 -- common/autotest_common.sh@10 -- # set +x 00:26:06.688 16:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:06.688 16:08:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.688 16:08:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.688 16:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:06.688 16:08:46 -- common/autotest_common.sh@10 -- # set +x 00:26:06.688 16:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:06.688 16:08:46 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:06.688 16:08:46 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:06.688 16:08:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:06.688 16:08:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:06.688 16:08:46 -- host/auth.sh@44 -- # digest=sha256 00:26:06.688 16:08:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:06.688 16:08:46 -- host/auth.sh@44 -- # keyid=0 00:26:06.688 16:08:46 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:06.688 16:08:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:06.688 16:08:46 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:06.688 16:08:46 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:06.688 16:08:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:26:06.688 16:08:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:06.688 16:08:46 -- host/auth.sh@68 -- # digest=sha256 00:26:06.688 16:08:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:06.688 16:08:46 -- host/auth.sh@68 -- # keyid=0 00:26:06.688 16:08:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:06.688 16:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:06.688 16:08:46 -- common/autotest_common.sh@10 -- # set +x 00:26:06.688 16:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:06.688 16:08:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:06.688 16:08:46 -- nvmf/common.sh@717 -- # local ip 00:26:06.688 16:08:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:06.688 16:08:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:06.688 16:08:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.688 16:08:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.688 16:08:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:06.688 16:08:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.688 16:08:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:06.688 16:08:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:06.688 16:08:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:06.688 16:08:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:06.688 16:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:06.688 16:08:46 -- common/autotest_common.sh@10 -- # set +x 00:26:07.257 nvme0n1 00:26:07.257 16:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:07.257 16:08:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.257 16:08:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:07.257 16:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:07.257 16:08:46 -- common/autotest_common.sh@10 -- # set +x 00:26:07.257 16:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:07.516 16:08:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.516 16:08:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.516 16:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:07.516 16:08:46 -- common/autotest_common.sh@10 -- # set +x 00:26:07.516 16:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:07.516 16:08:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:07.516 16:08:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:07.516 16:08:46 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:07.516 16:08:46 -- host/auth.sh@44 -- # digest=sha256 00:26:07.516 16:08:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:07.516 16:08:46 -- host/auth.sh@44 -- # keyid=1 00:26:07.516 16:08:46 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:07.516 16:08:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:07.516 16:08:46 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:07.516 16:08:46 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:07.516 16:08:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:26:07.516 16:08:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:07.516 16:08:46 -- host/auth.sh@68 -- # digest=sha256 00:26:07.516 16:08:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:07.516 16:08:46 -- host/auth.sh@68 -- # keyid=1 00:26:07.516 16:08:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:07.516 16:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:07.516 16:08:46 -- common/autotest_common.sh@10 -- # set +x 00:26:07.516 16:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:07.516 16:08:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:07.516 16:08:46 -- nvmf/common.sh@717 -- # local ip 00:26:07.516 16:08:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:07.516 16:08:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:07.516 16:08:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.516 16:08:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.516 16:08:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:07.516 16:08:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.516 16:08:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:07.516 16:08:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:07.516 16:08:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:07.516 16:08:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:07.516 16:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:07.516 16:08:46 -- common/autotest_common.sh@10 -- # set +x 00:26:08.084 nvme0n1 00:26:08.084 16:08:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:08.084 16:08:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.084 16:08:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:08.084 16:08:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:08.084 16:08:47 -- common/autotest_common.sh@10 -- # set +x 00:26:08.084 16:08:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:08.084 16:08:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.084 16:08:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.084 16:08:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:08.084 16:08:47 -- common/autotest_common.sh@10 -- # set +x 00:26:08.084 16:08:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:08.084 16:08:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:08.084 16:08:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:08.084 16:08:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:08.084 16:08:47 -- host/auth.sh@44 -- # digest=sha256 
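(Illustrative aside, not part of the captured console output: after every attach, the trace confirms that a controller named nvme0 actually came up and then tears it down before moving on to the next digest/dhgroup/keyid combination - that is the repeated bdev_nvme_get_controllers / jq / bdev_nvme_detach_controller block at host/auth.sh@73-74. A minimal sketch of that verify-and-teardown step, reusing the same jq filter and the assumed rpc() wrapper from the earlier sketch:

  # same assumed stand-in for rpc_cmd as in the previous sketch
  rpc() { ./scripts/rpc.py "$@"; }

  # confirm the authenticated connect produced a controller (the trace's
  # "[[ nvme0 == \n\v\m\e\0 ]]" check), then drop it for the next iteration
  name=$(rpc bdev_nvme_get_controllers | jq -r '.[].name')
  [[ "$name" == "nvme0" ]] || exit 1
  rpc bdev_nvme_detach_controller nvme0

Only the error handling via exit is an assumption; the RPCs, the jq expression, and the nvme0 name comparison are taken directly from the trace.)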
00:26:08.084 16:08:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:08.084 16:08:47 -- host/auth.sh@44 -- # keyid=2 00:26:08.084 16:08:47 -- host/auth.sh@45 -- # key=DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:08.085 16:08:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:08.085 16:08:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:08.085 16:08:47 -- host/auth.sh@49 -- # echo DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:08.085 16:08:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:26:08.085 16:08:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:08.085 16:08:47 -- host/auth.sh@68 -- # digest=sha256 00:26:08.085 16:08:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:08.085 16:08:47 -- host/auth.sh@68 -- # keyid=2 00:26:08.085 16:08:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:08.085 16:08:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:08.085 16:08:47 -- common/autotest_common.sh@10 -- # set +x 00:26:08.085 16:08:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:08.085 16:08:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:08.085 16:08:47 -- nvmf/common.sh@717 -- # local ip 00:26:08.085 16:08:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:08.085 16:08:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:08.085 16:08:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.085 16:08:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.085 16:08:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:08.085 16:08:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.085 16:08:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:08.085 16:08:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:08.085 16:08:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:08.085 16:08:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:08.085 16:08:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:08.085 16:08:47 -- common/autotest_common.sh@10 -- # set +x 00:26:08.653 nvme0n1 00:26:08.653 16:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:08.653 16:08:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.653 16:08:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:08.653 16:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:08.653 16:08:48 -- common/autotest_common.sh@10 -- # set +x 00:26:08.653 16:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:08.653 16:08:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.653 16:08:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.653 16:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:08.653 16:08:48 -- common/autotest_common.sh@10 -- # set +x 00:26:08.653 16:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:08.653 16:08:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:08.653 16:08:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:08.653 16:08:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:08.653 16:08:48 -- host/auth.sh@44 -- # digest=sha256 00:26:08.653 16:08:48 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:08.653 16:08:48 -- host/auth.sh@44 -- # keyid=3 00:26:08.654 16:08:48 -- host/auth.sh@45 -- # 
key=DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:08.654 16:08:48 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:08.654 16:08:48 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:08.654 16:08:48 -- host/auth.sh@49 -- # echo DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:08.654 16:08:48 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:26:08.654 16:08:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:08.654 16:08:48 -- host/auth.sh@68 -- # digest=sha256 00:26:08.654 16:08:48 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:08.654 16:08:48 -- host/auth.sh@68 -- # keyid=3 00:26:08.654 16:08:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:08.654 16:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:08.654 16:08:48 -- common/autotest_common.sh@10 -- # set +x 00:26:08.654 16:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:08.654 16:08:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:08.654 16:08:48 -- nvmf/common.sh@717 -- # local ip 00:26:08.654 16:08:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:08.654 16:08:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:08.654 16:08:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.654 16:08:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.654 16:08:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:08.654 16:08:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.654 16:08:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:08.654 16:08:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:08.654 16:08:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:08.654 16:08:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:08.654 16:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:08.654 16:08:48 -- common/autotest_common.sh@10 -- # set +x 00:26:09.222 nvme0n1 00:26:09.222 16:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:09.222 16:08:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.222 16:08:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:09.222 16:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:09.222 16:08:48 -- common/autotest_common.sh@10 -- # set +x 00:26:09.222 16:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:09.222 16:08:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.222 16:08:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.222 16:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:09.222 16:08:48 -- common/autotest_common.sh@10 -- # set +x 00:26:09.481 16:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:09.481 16:08:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:09.481 16:08:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:09.481 16:08:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:09.481 16:08:48 -- host/auth.sh@44 -- # digest=sha256 00:26:09.481 16:08:48 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:09.481 16:08:48 -- host/auth.sh@44 -- # keyid=4 00:26:09.481 16:08:48 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:09.481 
16:08:48 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:09.481 16:08:48 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:09.481 16:08:48 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:09.481 16:08:48 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:26:09.481 16:08:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:09.481 16:08:48 -- host/auth.sh@68 -- # digest=sha256 00:26:09.481 16:08:48 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:09.481 16:08:48 -- host/auth.sh@68 -- # keyid=4 00:26:09.481 16:08:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:09.481 16:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:09.481 16:08:48 -- common/autotest_common.sh@10 -- # set +x 00:26:09.481 16:08:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:09.481 16:08:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:09.481 16:08:48 -- nvmf/common.sh@717 -- # local ip 00:26:09.481 16:08:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:09.481 16:08:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:09.481 16:08:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.481 16:08:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.481 16:08:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:09.481 16:08:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.481 16:08:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:09.481 16:08:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:09.481 16:08:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:09.481 16:08:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:09.481 16:08:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:09.481 16:08:48 -- common/autotest_common.sh@10 -- # set +x 00:26:10.049 nvme0n1 00:26:10.049 16:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.049 16:08:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.049 16:08:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:10.049 16:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.049 16:08:49 -- common/autotest_common.sh@10 -- # set +x 00:26:10.049 16:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.049 16:08:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.049 16:08:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.049 16:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.049 16:08:49 -- common/autotest_common.sh@10 -- # set +x 00:26:10.049 16:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.049 16:08:49 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:26:10.049 16:08:49 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:10.049 16:08:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:10.049 16:08:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:10.049 16:08:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:10.049 16:08:49 -- host/auth.sh@44 -- # digest=sha384 00:26:10.049 16:08:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.049 16:08:49 -- host/auth.sh@44 -- # keyid=0 00:26:10.049 16:08:49 -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:10.049 16:08:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:10.049 16:08:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:10.049 16:08:49 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:10.049 16:08:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:26:10.049 16:08:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:10.049 16:08:49 -- host/auth.sh@68 -- # digest=sha384 00:26:10.049 16:08:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:10.049 16:08:49 -- host/auth.sh@68 -- # keyid=0 00:26:10.049 16:08:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:10.049 16:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.049 16:08:49 -- common/autotest_common.sh@10 -- # set +x 00:26:10.049 16:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.049 16:08:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:10.049 16:08:49 -- nvmf/common.sh@717 -- # local ip 00:26:10.049 16:08:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:10.049 16:08:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:10.049 16:08:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.049 16:08:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.049 16:08:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:10.049 16:08:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.049 16:08:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:10.049 16:08:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:10.049 16:08:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:10.049 16:08:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:10.049 16:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.049 16:08:49 -- common/autotest_common.sh@10 -- # set +x 00:26:10.049 nvme0n1 00:26:10.049 16:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.049 16:08:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.049 16:08:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:10.049 16:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.049 16:08:49 -- common/autotest_common.sh@10 -- # set +x 00:26:10.308 16:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.308 16:08:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.308 16:08:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.308 16:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.308 16:08:49 -- common/autotest_common.sh@10 -- # set +x 00:26:10.308 16:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.308 16:08:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:10.308 16:08:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:10.308 16:08:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:10.308 16:08:49 -- host/auth.sh@44 -- # digest=sha384 00:26:10.308 16:08:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.308 16:08:49 -- host/auth.sh@44 -- # keyid=1 00:26:10.308 16:08:49 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:10.308 16:08:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:10.308 
16:08:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:10.308 16:08:49 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:10.308 16:08:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:26:10.308 16:08:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:10.308 16:08:49 -- host/auth.sh@68 -- # digest=sha384 00:26:10.308 16:08:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:10.308 16:08:49 -- host/auth.sh@68 -- # keyid=1 00:26:10.308 16:08:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:10.308 16:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.308 16:08:49 -- common/autotest_common.sh@10 -- # set +x 00:26:10.308 16:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.308 16:08:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:10.308 16:08:49 -- nvmf/common.sh@717 -- # local ip 00:26:10.308 16:08:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:10.308 16:08:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:10.308 16:08:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.308 16:08:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.308 16:08:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:10.308 16:08:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.308 16:08:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:10.308 16:08:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:10.308 16:08:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:10.308 16:08:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:10.308 16:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.308 16:08:49 -- common/autotest_common.sh@10 -- # set +x 00:26:10.308 nvme0n1 00:26:10.308 16:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.308 16:08:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.308 16:08:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:10.308 16:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.308 16:08:49 -- common/autotest_common.sh@10 -- # set +x 00:26:10.308 16:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.308 16:08:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.308 16:08:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.308 16:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.308 16:08:49 -- common/autotest_common.sh@10 -- # set +x 00:26:10.568 16:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.568 16:08:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:10.568 16:08:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:10.568 16:08:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:10.568 16:08:50 -- host/auth.sh@44 -- # digest=sha384 00:26:10.568 16:08:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.568 16:08:50 -- host/auth.sh@44 -- # keyid=2 00:26:10.568 16:08:50 -- host/auth.sh@45 -- # key=DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:10.568 16:08:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:10.568 16:08:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:10.568 16:08:50 -- host/auth.sh@49 -- # echo 
DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:10.568 16:08:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:26:10.568 16:08:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:10.568 16:08:50 -- host/auth.sh@68 -- # digest=sha384 00:26:10.568 16:08:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:10.568 16:08:50 -- host/auth.sh@68 -- # keyid=2 00:26:10.568 16:08:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:10.568 16:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.568 16:08:50 -- common/autotest_common.sh@10 -- # set +x 00:26:10.568 16:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.568 16:08:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:10.568 16:08:50 -- nvmf/common.sh@717 -- # local ip 00:26:10.568 16:08:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:10.568 16:08:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:10.568 16:08:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.568 16:08:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.568 16:08:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:10.568 16:08:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.568 16:08:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:10.568 16:08:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:10.568 16:08:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:10.568 16:08:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:10.568 16:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.568 16:08:50 -- common/autotest_common.sh@10 -- # set +x 00:26:10.568 nvme0n1 00:26:10.568 16:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.568 16:08:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.568 16:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.568 16:08:50 -- common/autotest_common.sh@10 -- # set +x 00:26:10.568 16:08:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:10.568 16:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.568 16:08:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.568 16:08:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.568 16:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.568 16:08:50 -- common/autotest_common.sh@10 -- # set +x 00:26:10.568 16:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.568 16:08:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:10.568 16:08:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:10.568 16:08:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:10.568 16:08:50 -- host/auth.sh@44 -- # digest=sha384 00:26:10.568 16:08:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.568 16:08:50 -- host/auth.sh@44 -- # keyid=3 00:26:10.568 16:08:50 -- host/auth.sh@45 -- # key=DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:10.568 16:08:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:10.568 16:08:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:10.568 16:08:50 -- host/auth.sh@49 -- # echo DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:10.568 16:08:50 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe2048 3 00:26:10.568 16:08:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:10.568 16:08:50 -- host/auth.sh@68 -- # digest=sha384 00:26:10.568 16:08:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:10.568 16:08:50 -- host/auth.sh@68 -- # keyid=3 00:26:10.568 16:08:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:10.568 16:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.568 16:08:50 -- common/autotest_common.sh@10 -- # set +x 00:26:10.568 16:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.568 16:08:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:10.568 16:08:50 -- nvmf/common.sh@717 -- # local ip 00:26:10.568 16:08:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:10.568 16:08:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:10.568 16:08:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.568 16:08:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.568 16:08:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:10.568 16:08:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.568 16:08:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:10.568 16:08:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:10.568 16:08:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:10.568 16:08:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:10.568 16:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.568 16:08:50 -- common/autotest_common.sh@10 -- # set +x 00:26:10.828 nvme0n1 00:26:10.828 16:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.828 16:08:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.828 16:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.828 16:08:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:10.828 16:08:50 -- common/autotest_common.sh@10 -- # set +x 00:26:10.828 16:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.828 16:08:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.828 16:08:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.828 16:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.828 16:08:50 -- common/autotest_common.sh@10 -- # set +x 00:26:10.828 16:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.828 16:08:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:10.828 16:08:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:10.828 16:08:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:10.828 16:08:50 -- host/auth.sh@44 -- # digest=sha384 00:26:10.828 16:08:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:10.828 16:08:50 -- host/auth.sh@44 -- # keyid=4 00:26:10.828 16:08:50 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:10.828 16:08:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:10.828 16:08:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:10.828 16:08:50 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:10.828 16:08:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:26:10.828 16:08:50 -- host/auth.sh@66 
-- # local digest dhgroup keyid 00:26:10.828 16:08:50 -- host/auth.sh@68 -- # digest=sha384 00:26:10.828 16:08:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:10.828 16:08:50 -- host/auth.sh@68 -- # keyid=4 00:26:10.828 16:08:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:10.828 16:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.828 16:08:50 -- common/autotest_common.sh@10 -- # set +x 00:26:10.828 16:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.828 16:08:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:10.828 16:08:50 -- nvmf/common.sh@717 -- # local ip 00:26:10.828 16:08:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:10.828 16:08:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:10.828 16:08:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.828 16:08:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.828 16:08:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:10.828 16:08:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.828 16:08:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:10.828 16:08:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:10.828 16:08:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:10.828 16:08:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:10.828 16:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.828 16:08:50 -- common/autotest_common.sh@10 -- # set +x 00:26:11.087 nvme0n1 00:26:11.087 16:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:11.087 16:08:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.087 16:08:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:11.087 16:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:11.087 16:08:50 -- common/autotest_common.sh@10 -- # set +x 00:26:11.087 16:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:11.087 16:08:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.087 16:08:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.087 16:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:11.087 16:08:50 -- common/autotest_common.sh@10 -- # set +x 00:26:11.087 16:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:11.087 16:08:50 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:11.087 16:08:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:11.087 16:08:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:11.087 16:08:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:11.087 16:08:50 -- host/auth.sh@44 -- # digest=sha384 00:26:11.087 16:08:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:11.087 16:08:50 -- host/auth.sh@44 -- # keyid=0 00:26:11.087 16:08:50 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:11.087 16:08:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:11.087 16:08:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:11.087 16:08:50 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:11.087 16:08:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:26:11.087 16:08:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:11.087 16:08:50 -- host/auth.sh@68 -- # 
digest=sha384 00:26:11.087 16:08:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:11.087 16:08:50 -- host/auth.sh@68 -- # keyid=0 00:26:11.087 16:08:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:11.087 16:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:11.087 16:08:50 -- common/autotest_common.sh@10 -- # set +x 00:26:11.087 16:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:11.087 16:08:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:11.087 16:08:50 -- nvmf/common.sh@717 -- # local ip 00:26:11.087 16:08:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:11.087 16:08:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:11.087 16:08:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.087 16:08:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.087 16:08:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:11.087 16:08:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.087 16:08:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:11.087 16:08:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:11.088 16:08:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:11.088 16:08:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:11.088 16:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:11.088 16:08:50 -- common/autotest_common.sh@10 -- # set +x 00:26:11.347 nvme0n1 00:26:11.347 16:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:11.347 16:08:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.347 16:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:11.347 16:08:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:11.347 16:08:50 -- common/autotest_common.sh@10 -- # set +x 00:26:11.347 16:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:11.347 16:08:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.347 16:08:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.347 16:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:11.347 16:08:50 -- common/autotest_common.sh@10 -- # set +x 00:26:11.347 16:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:11.347 16:08:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:11.347 16:08:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:11.347 16:08:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:11.347 16:08:50 -- host/auth.sh@44 -- # digest=sha384 00:26:11.347 16:08:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:11.347 16:08:50 -- host/auth.sh@44 -- # keyid=1 00:26:11.347 16:08:50 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:11.347 16:08:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:11.347 16:08:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:11.347 16:08:50 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:11.347 16:08:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:26:11.347 16:08:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:11.347 16:08:50 -- host/auth.sh@68 -- # digest=sha384 00:26:11.347 16:08:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:11.347 16:08:50 -- host/auth.sh@68 
-- # keyid=1 00:26:11.347 16:08:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:11.347 16:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:11.347 16:08:50 -- common/autotest_common.sh@10 -- # set +x 00:26:11.347 16:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:11.347 16:08:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:11.347 16:08:50 -- nvmf/common.sh@717 -- # local ip 00:26:11.347 16:08:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:11.347 16:08:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:11.347 16:08:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.347 16:08:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.347 16:08:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:11.347 16:08:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.347 16:08:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:11.347 16:08:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:11.347 16:08:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:11.347 16:08:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:11.347 16:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:11.347 16:08:50 -- common/autotest_common.sh@10 -- # set +x 00:26:11.606 nvme0n1 00:26:11.606 16:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:11.606 16:08:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.606 16:08:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:11.606 16:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:11.606 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:26:11.606 16:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:11.606 16:08:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.606 16:08:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.606 16:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:11.606 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:26:11.606 16:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:11.606 16:08:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:11.606 16:08:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:11.606 16:08:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:11.606 16:08:51 -- host/auth.sh@44 -- # digest=sha384 00:26:11.606 16:08:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:11.606 16:08:51 -- host/auth.sh@44 -- # keyid=2 00:26:11.606 16:08:51 -- host/auth.sh@45 -- # key=DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:11.606 16:08:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:11.606 16:08:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:11.606 16:08:51 -- host/auth.sh@49 -- # echo DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:11.606 16:08:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:26:11.606 16:08:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:11.606 16:08:51 -- host/auth.sh@68 -- # digest=sha384 00:26:11.606 16:08:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:11.606 16:08:51 -- host/auth.sh@68 -- # keyid=2 00:26:11.606 16:08:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:11.606 16:08:51 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:26:11.606 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:26:11.606 16:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:11.606 16:08:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:11.606 16:08:51 -- nvmf/common.sh@717 -- # local ip 00:26:11.606 16:08:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:11.606 16:08:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:11.606 16:08:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.606 16:08:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.606 16:08:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:11.606 16:08:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.606 16:08:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:11.606 16:08:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:11.606 16:08:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:11.606 16:08:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:11.606 16:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:11.606 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:26:11.865 nvme0n1 00:26:11.865 16:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:11.865 16:08:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.865 16:08:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:11.865 16:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:11.865 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:26:11.865 16:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:11.865 16:08:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.865 16:08:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.865 16:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:11.866 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:26:11.866 16:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:11.866 16:08:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:11.866 16:08:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:11.866 16:08:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:11.866 16:08:51 -- host/auth.sh@44 -- # digest=sha384 00:26:11.866 16:08:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:11.866 16:08:51 -- host/auth.sh@44 -- # keyid=3 00:26:11.866 16:08:51 -- host/auth.sh@45 -- # key=DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:11.866 16:08:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:11.866 16:08:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:11.866 16:08:51 -- host/auth.sh@49 -- # echo DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:11.866 16:08:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:26:11.866 16:08:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:11.866 16:08:51 -- host/auth.sh@68 -- # digest=sha384 00:26:11.866 16:08:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:11.866 16:08:51 -- host/auth.sh@68 -- # keyid=3 00:26:11.866 16:08:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:11.866 16:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:11.866 16:08:51 -- common/autotest_common.sh@10 -- # set +x 
00:26:11.866 16:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:11.866 16:08:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:11.866 16:08:51 -- nvmf/common.sh@717 -- # local ip 00:26:11.866 16:08:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:11.866 16:08:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:11.866 16:08:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.866 16:08:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.866 16:08:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:11.866 16:08:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.866 16:08:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:11.866 16:08:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:11.866 16:08:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:11.866 16:08:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:11.866 16:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:11.866 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:26:12.125 nvme0n1 00:26:12.125 16:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.125 16:08:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.125 16:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.125 16:08:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:12.125 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:26:12.125 16:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.125 16:08:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.125 16:08:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.125 16:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.125 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:26:12.125 16:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.125 16:08:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:12.125 16:08:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:12.125 16:08:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:12.125 16:08:51 -- host/auth.sh@44 -- # digest=sha384 00:26:12.125 16:08:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:12.125 16:08:51 -- host/auth.sh@44 -- # keyid=4 00:26:12.125 16:08:51 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:12.125 16:08:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:12.125 16:08:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:12.125 16:08:51 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:12.125 16:08:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:26:12.125 16:08:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:12.125 16:08:51 -- host/auth.sh@68 -- # digest=sha384 00:26:12.125 16:08:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:12.125 16:08:51 -- host/auth.sh@68 -- # keyid=4 00:26:12.125 16:08:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:12.125 16:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.125 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:26:12.125 16:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
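[note] The trace above repeats one fixed host-side round for every digest/dhgroup/key index: restrict the allowed DH-HMAC-CHAP digests and DH groups with bdev_nvme_set_options, resolve the initiator IP, attach a controller with the matching --dhchap-key, confirm that nvme0 appears in bdev_nvme_get_controllers, then detach before the next combination. A minimal sketch of one such round, using the same rpc_cmd wrapper and RPC flags that appear in the trace; the key name key$keyid is assumed to refer to a DH-HMAC-CHAP secret registered earlier in the script (not shown in this excerpt):

    #!/usr/bin/env bash
    # One connect_authenticate-style round, as exercised in the trace above.
    # Assumes the SPDK test environment provides rpc_cmd (scripts/rpc.py wrapper).
    digest=sha384
    dhgroup=ffdhe3072
    keyid=0

    # Restrict the host to the digest/dhgroup combination under test.
    rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"

    # Connect to the target, authenticating with DH-HMAC-CHAP key "key$keyid".
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid"

    # Authentication succeeded only if the controller is actually present.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1

    # Tear down before the next digest/dhgroup/key combination.
    rpc_cmd bdev_nvme_detach_controller nvme0

[/note]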
00:26:12.125 16:08:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:12.125 16:08:51 -- nvmf/common.sh@717 -- # local ip 00:26:12.125 16:08:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:12.125 16:08:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:12.125 16:08:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.125 16:08:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.125 16:08:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:12.125 16:08:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.125 16:08:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:12.125 16:08:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:12.125 16:08:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:12.125 16:08:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:12.125 16:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.125 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:26:12.385 nvme0n1 00:26:12.385 16:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.385 16:08:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.385 16:08:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:12.385 16:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.385 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:26:12.385 16:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.385 16:08:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.385 16:08:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.385 16:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.385 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:26:12.385 16:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.385 16:08:51 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:12.385 16:08:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:12.385 16:08:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:12.385 16:08:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:12.385 16:08:51 -- host/auth.sh@44 -- # digest=sha384 00:26:12.385 16:08:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:12.385 16:08:51 -- host/auth.sh@44 -- # keyid=0 00:26:12.385 16:08:51 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:12.385 16:08:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:12.385 16:08:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:12.385 16:08:51 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:12.385 16:08:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:26:12.385 16:08:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:12.385 16:08:51 -- host/auth.sh@68 -- # digest=sha384 00:26:12.385 16:08:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:12.385 16:08:51 -- host/auth.sh@68 -- # keyid=0 00:26:12.385 16:08:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:12.385 16:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.385 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:26:12.385 16:08:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.385 16:08:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:12.385 16:08:51 -- 
nvmf/common.sh@717 -- # local ip 00:26:12.385 16:08:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:12.385 16:08:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:12.385 16:08:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.385 16:08:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.385 16:08:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:12.385 16:08:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.385 16:08:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:12.385 16:08:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:12.385 16:08:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:12.385 16:08:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:12.385 16:08:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.385 16:08:51 -- common/autotest_common.sh@10 -- # set +x 00:26:12.644 nvme0n1 00:26:12.644 16:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.644 16:08:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.644 16:08:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:12.645 16:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.645 16:08:52 -- common/autotest_common.sh@10 -- # set +x 00:26:12.645 16:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.645 16:08:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.645 16:08:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.645 16:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.645 16:08:52 -- common/autotest_common.sh@10 -- # set +x 00:26:12.645 16:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.645 16:08:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:12.645 16:08:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:12.645 16:08:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:12.645 16:08:52 -- host/auth.sh@44 -- # digest=sha384 00:26:12.645 16:08:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:12.645 16:08:52 -- host/auth.sh@44 -- # keyid=1 00:26:12.645 16:08:52 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:12.645 16:08:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:12.645 16:08:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:12.645 16:08:52 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:12.645 16:08:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:26:12.645 16:08:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:12.645 16:08:52 -- host/auth.sh@68 -- # digest=sha384 00:26:12.645 16:08:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:12.645 16:08:52 -- host/auth.sh@68 -- # keyid=1 00:26:12.645 16:08:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:12.645 16:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.645 16:08:52 -- common/autotest_common.sh@10 -- # set +x 00:26:12.645 16:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.645 16:08:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:12.645 16:08:52 -- nvmf/common.sh@717 -- # local ip 00:26:12.645 16:08:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:12.645 16:08:52 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:12.645 16:08:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.645 16:08:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.645 16:08:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:12.645 16:08:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.645 16:08:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:12.645 16:08:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:12.645 16:08:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:12.645 16:08:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:12.645 16:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.645 16:08:52 -- common/autotest_common.sh@10 -- # set +x 00:26:12.904 nvme0n1 00:26:12.904 16:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.904 16:08:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.904 16:08:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:12.904 16:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.904 16:08:52 -- common/autotest_common.sh@10 -- # set +x 00:26:12.904 16:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.163 16:08:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.163 16:08:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.163 16:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.163 16:08:52 -- common/autotest_common.sh@10 -- # set +x 00:26:13.163 16:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.163 16:08:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:13.163 16:08:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:13.163 16:08:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:13.163 16:08:52 -- host/auth.sh@44 -- # digest=sha384 00:26:13.163 16:08:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:13.163 16:08:52 -- host/auth.sh@44 -- # keyid=2 00:26:13.163 16:08:52 -- host/auth.sh@45 -- # key=DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:13.163 16:08:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:13.163 16:08:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:13.163 16:08:52 -- host/auth.sh@49 -- # echo DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:13.163 16:08:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:26:13.163 16:08:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:13.163 16:08:52 -- host/auth.sh@68 -- # digest=sha384 00:26:13.163 16:08:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:13.163 16:08:52 -- host/auth.sh@68 -- # keyid=2 00:26:13.163 16:08:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:13.163 16:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.163 16:08:52 -- common/autotest_common.sh@10 -- # set +x 00:26:13.163 16:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.163 16:08:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:13.163 16:08:52 -- nvmf/common.sh@717 -- # local ip 00:26:13.163 16:08:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:13.163 16:08:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:13.163 16:08:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.163 16:08:52 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.163 16:08:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:13.163 16:08:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.163 16:08:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:13.163 16:08:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:13.163 16:08:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:13.163 16:08:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:13.163 16:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.163 16:08:52 -- common/autotest_common.sh@10 -- # set +x 00:26:13.422 nvme0n1 00:26:13.422 16:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.422 16:08:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.422 16:08:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:13.422 16:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.422 16:08:52 -- common/autotest_common.sh@10 -- # set +x 00:26:13.422 16:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.422 16:08:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.422 16:08:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.422 16:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.422 16:08:52 -- common/autotest_common.sh@10 -- # set +x 00:26:13.422 16:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.422 16:08:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:13.422 16:08:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:13.422 16:08:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:13.422 16:08:52 -- host/auth.sh@44 -- # digest=sha384 00:26:13.422 16:08:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:13.422 16:08:52 -- host/auth.sh@44 -- # keyid=3 00:26:13.422 16:08:52 -- host/auth.sh@45 -- # key=DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:13.422 16:08:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:13.422 16:08:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:13.422 16:08:52 -- host/auth.sh@49 -- # echo DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:13.422 16:08:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:26:13.422 16:08:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:13.422 16:08:52 -- host/auth.sh@68 -- # digest=sha384 00:26:13.422 16:08:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:13.422 16:08:52 -- host/auth.sh@68 -- # keyid=3 00:26:13.422 16:08:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:13.422 16:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.422 16:08:52 -- common/autotest_common.sh@10 -- # set +x 00:26:13.422 16:08:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.422 16:08:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:13.422 16:08:52 -- nvmf/common.sh@717 -- # local ip 00:26:13.422 16:08:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:13.422 16:08:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:13.422 16:08:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.422 16:08:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.422 16:08:52 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:26:13.422 16:08:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.422 16:08:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:13.422 16:08:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:13.422 16:08:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:13.422 16:08:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:13.422 16:08:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.422 16:08:52 -- common/autotest_common.sh@10 -- # set +x 00:26:13.682 nvme0n1 00:26:13.682 16:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.682 16:08:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.682 16:08:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:13.682 16:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.682 16:08:53 -- common/autotest_common.sh@10 -- # set +x 00:26:13.682 16:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.682 16:08:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.682 16:08:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.682 16:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.682 16:08:53 -- common/autotest_common.sh@10 -- # set +x 00:26:13.682 16:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.682 16:08:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:13.682 16:08:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:13.682 16:08:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:13.682 16:08:53 -- host/auth.sh@44 -- # digest=sha384 00:26:13.682 16:08:53 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:13.682 16:08:53 -- host/auth.sh@44 -- # keyid=4 00:26:13.682 16:08:53 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:13.682 16:08:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:13.682 16:08:53 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:13.682 16:08:53 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:13.682 16:08:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:26:13.682 16:08:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:13.682 16:08:53 -- host/auth.sh@68 -- # digest=sha384 00:26:13.682 16:08:53 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:13.682 16:08:53 -- host/auth.sh@68 -- # keyid=4 00:26:13.682 16:08:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:13.682 16:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.682 16:08:53 -- common/autotest_common.sh@10 -- # set +x 00:26:13.682 16:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.682 16:08:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:13.682 16:08:53 -- nvmf/common.sh@717 -- # local ip 00:26:13.682 16:08:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:13.682 16:08:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:13.682 16:08:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.682 16:08:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.682 16:08:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:13.682 16:08:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP 
]] 00:26:13.682 16:08:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:13.682 16:08:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:13.682 16:08:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:13.682 16:08:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:13.682 16:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.682 16:08:53 -- common/autotest_common.sh@10 -- # set +x 00:26:13.954 nvme0n1 00:26:13.954 16:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.954 16:08:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.954 16:08:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:13.954 16:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.954 16:08:53 -- common/autotest_common.sh@10 -- # set +x 00:26:13.954 16:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.954 16:08:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.954 16:08:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.954 16:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.954 16:08:53 -- common/autotest_common.sh@10 -- # set +x 00:26:13.954 16:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.954 16:08:53 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.954 16:08:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:13.954 16:08:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:13.954 16:08:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:13.954 16:08:53 -- host/auth.sh@44 -- # digest=sha384 00:26:13.954 16:08:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:13.954 16:08:53 -- host/auth.sh@44 -- # keyid=0 00:26:13.954 16:08:53 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:13.954 16:08:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:13.954 16:08:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:13.954 16:08:53 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:13.954 16:08:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:26:13.954 16:08:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:13.954 16:08:53 -- host/auth.sh@68 -- # digest=sha384 00:26:13.954 16:08:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:13.954 16:08:53 -- host/auth.sh@68 -- # keyid=0 00:26:13.954 16:08:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:13.954 16:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.954 16:08:53 -- common/autotest_common.sh@10 -- # set +x 00:26:13.954 16:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.954 16:08:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:13.954 16:08:53 -- nvmf/common.sh@717 -- # local ip 00:26:13.954 16:08:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:13.954 16:08:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:13.954 16:08:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.955 16:08:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.955 16:08:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:13.955 16:08:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.955 16:08:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:13.955 
16:08:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:13.955 16:08:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:13.955 16:08:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:13.955 16:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.955 16:08:53 -- common/autotest_common.sh@10 -- # set +x 00:26:14.522 nvme0n1 00:26:14.522 16:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:14.522 16:08:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.522 16:08:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:14.522 16:08:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.522 16:08:53 -- common/autotest_common.sh@10 -- # set +x 00:26:14.522 16:08:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:14.522 16:08:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.522 16:08:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.522 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.522 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:26:14.522 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:14.522 16:08:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:14.522 16:08:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:14.522 16:08:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:14.522 16:08:54 -- host/auth.sh@44 -- # digest=sha384 00:26:14.522 16:08:54 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:14.522 16:08:54 -- host/auth.sh@44 -- # keyid=1 00:26:14.522 16:08:54 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:14.522 16:08:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:14.522 16:08:54 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:14.522 16:08:54 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:14.522 16:08:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:26:14.522 16:08:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:14.522 16:08:54 -- host/auth.sh@68 -- # digest=sha384 00:26:14.522 16:08:54 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:14.522 16:08:54 -- host/auth.sh@68 -- # keyid=1 00:26:14.522 16:08:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:14.522 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.522 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:26:14.522 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:14.522 16:08:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:14.522 16:08:54 -- nvmf/common.sh@717 -- # local ip 00:26:14.522 16:08:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:14.522 16:08:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:14.522 16:08:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.522 16:08:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.522 16:08:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:14.522 16:08:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.522 16:08:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:14.522 16:08:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:14.522 16:08:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
00:26:14.522 16:08:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:14.522 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.522 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:26:14.780 nvme0n1 00:26:14.780 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:14.780 16:08:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:14.780 16:08:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.780 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.780 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:26:14.780 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:14.780 16:08:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.780 16:08:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.780 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.780 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:26:15.038 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.038 16:08:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:15.038 16:08:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:15.038 16:08:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:15.038 16:08:54 -- host/auth.sh@44 -- # digest=sha384 00:26:15.038 16:08:54 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:15.038 16:08:54 -- host/auth.sh@44 -- # keyid=2 00:26:15.038 16:08:54 -- host/auth.sh@45 -- # key=DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:15.038 16:08:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:15.038 16:08:54 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:15.038 16:08:54 -- host/auth.sh@49 -- # echo DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:15.038 16:08:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:26:15.038 16:08:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:15.038 16:08:54 -- host/auth.sh@68 -- # digest=sha384 00:26:15.038 16:08:54 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:15.038 16:08:54 -- host/auth.sh@68 -- # keyid=2 00:26:15.039 16:08:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:15.039 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.039 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:26:15.039 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.039 16:08:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:15.039 16:08:54 -- nvmf/common.sh@717 -- # local ip 00:26:15.039 16:08:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:15.039 16:08:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:15.039 16:08:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.039 16:08:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.039 16:08:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:15.039 16:08:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.039 16:08:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:15.039 16:08:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:15.039 16:08:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:15.039 16:08:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:15.039 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.039 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:26:15.297 nvme0n1 00:26:15.297 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.297 16:08:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:15.297 16:08:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.297 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.297 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:26:15.297 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.297 16:08:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.297 16:08:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.297 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.297 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:26:15.297 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.297 16:08:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:15.297 16:08:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:15.297 16:08:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:15.297 16:08:54 -- host/auth.sh@44 -- # digest=sha384 00:26:15.297 16:08:54 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:15.297 16:08:54 -- host/auth.sh@44 -- # keyid=3 00:26:15.297 16:08:54 -- host/auth.sh@45 -- # key=DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:15.297 16:08:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:15.297 16:08:54 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:15.297 16:08:54 -- host/auth.sh@49 -- # echo DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:15.297 16:08:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:26:15.297 16:08:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:15.297 16:08:54 -- host/auth.sh@68 -- # digest=sha384 00:26:15.297 16:08:54 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:15.297 16:08:54 -- host/auth.sh@68 -- # keyid=3 00:26:15.297 16:08:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:15.297 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.298 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:26:15.298 16:08:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.298 16:08:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:15.298 16:08:54 -- nvmf/common.sh@717 -- # local ip 00:26:15.298 16:08:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:15.298 16:08:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:15.298 16:08:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.298 16:08:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.298 16:08:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:15.298 16:08:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.298 16:08:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:15.298 16:08:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:15.298 16:08:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:15.298 16:08:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:15.298 16:08:54 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:26:15.298 16:08:54 -- common/autotest_common.sh@10 -- # set +x 00:26:15.865 nvme0n1 00:26:15.865 16:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.865 16:08:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.865 16:08:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:15.865 16:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.865 16:08:55 -- common/autotest_common.sh@10 -- # set +x 00:26:15.865 16:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.865 16:08:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.865 16:08:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.865 16:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.865 16:08:55 -- common/autotest_common.sh@10 -- # set +x 00:26:15.865 16:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.866 16:08:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:15.866 16:08:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:15.866 16:08:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:15.866 16:08:55 -- host/auth.sh@44 -- # digest=sha384 00:26:15.866 16:08:55 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:15.866 16:08:55 -- host/auth.sh@44 -- # keyid=4 00:26:15.866 16:08:55 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:15.866 16:08:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:15.866 16:08:55 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:15.866 16:08:55 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:15.866 16:08:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:26:15.866 16:08:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:15.866 16:08:55 -- host/auth.sh@68 -- # digest=sha384 00:26:15.866 16:08:55 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:15.866 16:08:55 -- host/auth.sh@68 -- # keyid=4 00:26:15.866 16:08:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:15.866 16:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.866 16:08:55 -- common/autotest_common.sh@10 -- # set +x 00:26:15.866 16:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.866 16:08:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:15.866 16:08:55 -- nvmf/common.sh@717 -- # local ip 00:26:15.866 16:08:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:15.866 16:08:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:15.866 16:08:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.866 16:08:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.866 16:08:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:15.866 16:08:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.866 16:08:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:15.866 16:08:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:15.866 16:08:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:15.866 16:08:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:15.866 16:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.866 16:08:55 -- common/autotest_common.sh@10 -- # set +x 00:26:16.124 
nvme0n1 00:26:16.124 16:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.124 16:08:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.124 16:08:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:16.124 16:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.124 16:08:55 -- common/autotest_common.sh@10 -- # set +x 00:26:16.124 16:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.124 16:08:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.124 16:08:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.124 16:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.124 16:08:55 -- common/autotest_common.sh@10 -- # set +x 00:26:16.383 16:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.383 16:08:55 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:16.383 16:08:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:16.383 16:08:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:16.383 16:08:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:16.383 16:08:55 -- host/auth.sh@44 -- # digest=sha384 00:26:16.383 16:08:55 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:16.383 16:08:55 -- host/auth.sh@44 -- # keyid=0 00:26:16.383 16:08:55 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:16.383 16:08:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:16.383 16:08:55 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:16.383 16:08:55 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:16.383 16:08:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:26:16.383 16:08:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:16.383 16:08:55 -- host/auth.sh@68 -- # digest=sha384 00:26:16.383 16:08:55 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:16.383 16:08:55 -- host/auth.sh@68 -- # keyid=0 00:26:16.383 16:08:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:16.383 16:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.383 16:08:55 -- common/autotest_common.sh@10 -- # set +x 00:26:16.383 16:08:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.383 16:08:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:16.383 16:08:55 -- nvmf/common.sh@717 -- # local ip 00:26:16.383 16:08:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:16.383 16:08:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:16.383 16:08:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.383 16:08:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.383 16:08:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:16.383 16:08:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.383 16:08:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:16.383 16:08:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:16.383 16:08:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:16.383 16:08:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:16.383 16:08:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.383 16:08:55 -- common/autotest_common.sh@10 -- # set +x 00:26:16.950 nvme0n1 00:26:16.950 16:08:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
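[note] Each round above is preceded by nvmet_auth_set_key <digest> <dhgroup> <keyid>, which, judging from the echoed values in the trace ('hmac(sha384)', the DH group name, and a DHHC-1:NN:...: secret), provisions the matching credentials on the kernel nvmet target. The helper's body is not shown in this excerpt; the sketch below is an assumption about what it likely does, and the configfs attribute paths are assumptions as well, not taken from this log:

    # Hypothetical sketch of nvmet_auth_set_key; paths and the keys[] array
    # are assumptions for illustration only.
    nvmet_auth_set_key() {
            local digest=$1 dhgroup=$2 keyid=$3
            local key=${keys[$keyid]}   # a DHHC-1:...: secret, as seen in the trace
            local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

            echo "hmac($digest)" > "$host/dhchap_hash"     # e.g. hmac(sha384)
            echo "$dhgroup"      > "$host/dhchap_dhgroup"  # ffdhe2048 .. ffdhe8192
            echo "$key"          > "$host/dhchap_key"      # host secret in DHHC-1 format
    }

[/note]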
00:26:16.950 16:08:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.950 16:08:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:16.950 16:08:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.950 16:08:56 -- common/autotest_common.sh@10 -- # set +x 00:26:16.950 16:08:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.950 16:08:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.950 16:08:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.950 16:08:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.950 16:08:56 -- common/autotest_common.sh@10 -- # set +x 00:26:16.950 16:08:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.950 16:08:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:16.950 16:08:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:16.950 16:08:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:16.950 16:08:56 -- host/auth.sh@44 -- # digest=sha384 00:26:16.950 16:08:56 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:16.950 16:08:56 -- host/auth.sh@44 -- # keyid=1 00:26:16.950 16:08:56 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:16.950 16:08:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:16.950 16:08:56 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:16.950 16:08:56 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:16.950 16:08:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:26:16.950 16:08:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:16.950 16:08:56 -- host/auth.sh@68 -- # digest=sha384 00:26:16.950 16:08:56 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:16.950 16:08:56 -- host/auth.sh@68 -- # keyid=1 00:26:16.950 16:08:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:16.950 16:08:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.950 16:08:56 -- common/autotest_common.sh@10 -- # set +x 00:26:16.950 16:08:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.950 16:08:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:16.950 16:08:56 -- nvmf/common.sh@717 -- # local ip 00:26:16.950 16:08:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:16.950 16:08:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:16.950 16:08:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.950 16:08:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.950 16:08:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:16.950 16:08:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.950 16:08:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:16.950 16:08:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:16.950 16:08:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:16.950 16:08:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:16.950 16:08:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.950 16:08:56 -- common/autotest_common.sh@10 -- # set +x 00:26:17.518 nvme0n1 00:26:17.518 16:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:17.518 16:08:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.518 16:08:57 -- host/auth.sh@73 
-- # jq -r '.[].name' 00:26:17.518 16:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:17.518 16:08:57 -- common/autotest_common.sh@10 -- # set +x 00:26:17.518 16:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:17.518 16:08:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.518 16:08:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.518 16:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:17.518 16:08:57 -- common/autotest_common.sh@10 -- # set +x 00:26:17.518 16:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:17.518 16:08:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:17.518 16:08:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:17.518 16:08:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:17.518 16:08:57 -- host/auth.sh@44 -- # digest=sha384 00:26:17.518 16:08:57 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:17.518 16:08:57 -- host/auth.sh@44 -- # keyid=2 00:26:17.518 16:08:57 -- host/auth.sh@45 -- # key=DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:17.518 16:08:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:17.518 16:08:57 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:17.518 16:08:57 -- host/auth.sh@49 -- # echo DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:17.518 16:08:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:26:17.518 16:08:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:17.518 16:08:57 -- host/auth.sh@68 -- # digest=sha384 00:26:17.518 16:08:57 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:17.518 16:08:57 -- host/auth.sh@68 -- # keyid=2 00:26:17.518 16:08:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:17.518 16:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:17.518 16:08:57 -- common/autotest_common.sh@10 -- # set +x 00:26:17.518 16:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:17.518 16:08:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:17.518 16:08:57 -- nvmf/common.sh@717 -- # local ip 00:26:17.518 16:08:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:17.518 16:08:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:17.518 16:08:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.518 16:08:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.518 16:08:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:17.518 16:08:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.518 16:08:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:17.518 16:08:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:17.518 16:08:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:17.518 16:08:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:17.518 16:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:17.519 16:08:57 -- common/autotest_common.sh@10 -- # set +x 00:26:18.087 nvme0n1 00:26:18.087 16:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.087 16:08:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.087 16:08:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:18.087 16:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.087 16:08:57 -- common/autotest_common.sh@10 -- # set +x 
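The nvmet_auth_set_key calls traced above configure the target side for one secret at a time: the helper takes a digest, a DH group and a key index, then echoes 'hmac(<digest>)', the DH group name and the DHHC-1 secret. xtrace does not show where those echoes are redirected; the sketch below assumes the usual nvmet configfs attributes for the allowed host, so the directory and attribute names are illustrative rather than confirmed by this log.

  # Reconstruction of the target-side helper seen at host/auth.sh@42-49.
  # ASSUMPTION: the echoes land in the nvmet configfs entry for the allowed
  # host; the exact path is not visible in this excerpt.
  nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[$keyid]}   # keys[] is filled earlier in host/auth.sh (not shown here)
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed location
    echo "hmac(${digest})" > "${host_dir}/dhchap_hash"      # e.g. hmac(sha384)
    echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"   # e.g. ffdhe8192
    echo "${key}"          > "${host_dir}/dhchap_key"       # DHHC-1:... secret for this host
  }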
00:26:18.087 16:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.087 16:08:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.087 16:08:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.087 16:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.087 16:08:57 -- common/autotest_common.sh@10 -- # set +x 00:26:18.087 16:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.087 16:08:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:18.087 16:08:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:18.087 16:08:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:18.087 16:08:57 -- host/auth.sh@44 -- # digest=sha384 00:26:18.087 16:08:57 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:18.087 16:08:57 -- host/auth.sh@44 -- # keyid=3 00:26:18.087 16:08:57 -- host/auth.sh@45 -- # key=DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:18.087 16:08:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:18.087 16:08:57 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:18.087 16:08:57 -- host/auth.sh@49 -- # echo DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:18.087 16:08:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:26:18.087 16:08:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:18.087 16:08:57 -- host/auth.sh@68 -- # digest=sha384 00:26:18.087 16:08:57 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:18.087 16:08:57 -- host/auth.sh@68 -- # keyid=3 00:26:18.087 16:08:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:18.087 16:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.087 16:08:57 -- common/autotest_common.sh@10 -- # set +x 00:26:18.087 16:08:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.346 16:08:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:18.346 16:08:57 -- nvmf/common.sh@717 -- # local ip 00:26:18.346 16:08:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:18.346 16:08:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:18.346 16:08:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.346 16:08:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.346 16:08:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:18.346 16:08:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.346 16:08:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:18.346 16:08:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:18.346 16:08:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:18.346 16:08:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:18.346 16:08:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.346 16:08:57 -- common/autotest_common.sh@10 -- # set +x 00:26:18.915 nvme0n1 00:26:18.915 16:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.915 16:08:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.915 16:08:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:18.915 16:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.915 16:08:58 -- common/autotest_common.sh@10 -- # set +x 00:26:18.915 16:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.915 16:08:58 -- host/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:26:18.915 16:08:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.915 16:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.915 16:08:58 -- common/autotest_common.sh@10 -- # set +x 00:26:18.915 16:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.915 16:08:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:18.915 16:08:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:18.915 16:08:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:18.915 16:08:58 -- host/auth.sh@44 -- # digest=sha384 00:26:18.915 16:08:58 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:18.915 16:08:58 -- host/auth.sh@44 -- # keyid=4 00:26:18.915 16:08:58 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:18.915 16:08:58 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:18.915 16:08:58 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:18.915 16:08:58 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:18.915 16:08:58 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:26:18.915 16:08:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:18.915 16:08:58 -- host/auth.sh@68 -- # digest=sha384 00:26:18.915 16:08:58 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:18.915 16:08:58 -- host/auth.sh@68 -- # keyid=4 00:26:18.915 16:08:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:18.915 16:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.915 16:08:58 -- common/autotest_common.sh@10 -- # set +x 00:26:18.915 16:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.915 16:08:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:18.915 16:08:58 -- nvmf/common.sh@717 -- # local ip 00:26:18.915 16:08:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:18.915 16:08:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:18.915 16:08:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.915 16:08:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.915 16:08:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:18.915 16:08:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.915 16:08:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:18.915 16:08:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:18.915 16:08:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:18.915 16:08:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:18.915 16:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.915 16:08:58 -- common/autotest_common.sh@10 -- # set +x 00:26:19.484 nvme0n1 00:26:19.484 16:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.484 16:08:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.484 16:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.484 16:08:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:19.484 16:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:19.484 16:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.484 16:08:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.484 16:08:59 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:19.484 16:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.484 16:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:19.484 16:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.484 16:08:59 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:26:19.484 16:08:59 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:19.484 16:08:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:19.484 16:08:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:19.484 16:08:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:19.484 16:08:59 -- host/auth.sh@44 -- # digest=sha512 00:26:19.484 16:08:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:19.484 16:08:59 -- host/auth.sh@44 -- # keyid=0 00:26:19.484 16:08:59 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:19.484 16:08:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:19.484 16:08:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:19.484 16:08:59 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:19.484 16:08:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:26:19.484 16:08:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:19.484 16:08:59 -- host/auth.sh@68 -- # digest=sha512 00:26:19.484 16:08:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:19.484 16:08:59 -- host/auth.sh@68 -- # keyid=0 00:26:19.484 16:08:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:19.484 16:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.484 16:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:19.484 16:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.484 16:08:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:19.484 16:08:59 -- nvmf/common.sh@717 -- # local ip 00:26:19.484 16:08:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:19.484 16:08:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:19.484 16:08:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.484 16:08:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.484 16:08:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:19.484 16:08:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.484 16:08:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:19.484 16:08:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:19.484 16:08:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:19.484 16:08:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:19.484 16:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.484 16:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:19.744 nvme0n1 00:26:19.744 16:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.744 16:08:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.744 16:08:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:19.744 16:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.744 16:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:19.744 16:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.744 16:08:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.744 16:08:59 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:19.744 16:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.744 16:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:19.744 16:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.744 16:08:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:19.744 16:08:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:19.744 16:08:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:19.744 16:08:59 -- host/auth.sh@44 -- # digest=sha512 00:26:19.744 16:08:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:19.744 16:08:59 -- host/auth.sh@44 -- # keyid=1 00:26:19.744 16:08:59 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:19.744 16:08:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:19.744 16:08:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:19.744 16:08:59 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:19.744 16:08:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:26:19.744 16:08:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:19.744 16:08:59 -- host/auth.sh@68 -- # digest=sha512 00:26:19.744 16:08:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:19.744 16:08:59 -- host/auth.sh@68 -- # keyid=1 00:26:19.744 16:08:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:19.744 16:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.744 16:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:19.744 16:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.744 16:08:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:19.744 16:08:59 -- nvmf/common.sh@717 -- # local ip 00:26:19.744 16:08:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:19.744 16:08:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:19.744 16:08:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.744 16:08:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.744 16:08:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:19.744 16:08:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.744 16:08:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:19.744 16:08:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:19.744 16:08:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:19.744 16:08:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:19.744 16:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.744 16:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:20.003 nvme0n1 00:26:20.003 16:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.003 16:08:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.003 16:08:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:20.003 16:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.003 16:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:20.003 16:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.003 16:08:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.003 16:08:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.003 16:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 
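Each connect_authenticate block is the initiator-side half: restrict the allowed digest and DH group via bdev_nvme_set_options, resolve the target address, then attach with the matching --dhchap-key. rpc_cmd is the test framework's wrapper and presumably forwards these arguments to scripts/rpc.py, so one iteration (sha512/ffdhe2048 with key1, as in the trace just above) could be reproduced by hand roughly as below; the RPC socket of the running SPDK application is whatever rpc_cmd was configured with and is not shown here, and key1 refers to a DH-CHAP key object set up earlier in the script. The bare nvme0n1 lines scattered through the log appear to be the namespace bdev names printed by the attach call.

  # Hand-run equivalent of one connect_authenticate iteration (illustrative).
  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1   # named key registered earlier in the script (not shown in this excerpt)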
00:26:20.003 16:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:20.003 16:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.003 16:08:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:20.003 16:08:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:20.003 16:08:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:20.003 16:08:59 -- host/auth.sh@44 -- # digest=sha512 00:26:20.003 16:08:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.003 16:08:59 -- host/auth.sh@44 -- # keyid=2 00:26:20.003 16:08:59 -- host/auth.sh@45 -- # key=DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:20.003 16:08:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:20.003 16:08:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:20.003 16:08:59 -- host/auth.sh@49 -- # echo DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:20.003 16:08:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:26:20.003 16:08:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:20.003 16:08:59 -- host/auth.sh@68 -- # digest=sha512 00:26:20.003 16:08:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:20.003 16:08:59 -- host/auth.sh@68 -- # keyid=2 00:26:20.003 16:08:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:20.003 16:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.003 16:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:20.003 16:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.003 16:08:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:20.003 16:08:59 -- nvmf/common.sh@717 -- # local ip 00:26:20.003 16:08:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:20.003 16:08:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:20.003 16:08:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.003 16:08:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.003 16:08:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:20.003 16:08:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.003 16:08:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:20.003 16:08:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:20.003 16:08:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:20.003 16:08:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:20.003 16:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.003 16:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:20.262 nvme0n1 00:26:20.262 16:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.262 16:08:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.262 16:08:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:20.262 16:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.262 16:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:20.262 16:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.262 16:08:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.262 16:08:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.262 16:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.262 16:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:20.262 16:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.262 16:08:59 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:20.262 16:08:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:20.262 16:08:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:20.262 16:08:59 -- host/auth.sh@44 -- # digest=sha512 00:26:20.262 16:08:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.262 16:08:59 -- host/auth.sh@44 -- # keyid=3 00:26:20.262 16:08:59 -- host/auth.sh@45 -- # key=DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:20.262 16:08:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:20.262 16:08:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:20.262 16:08:59 -- host/auth.sh@49 -- # echo DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:20.262 16:08:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:26:20.262 16:08:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:20.262 16:08:59 -- host/auth.sh@68 -- # digest=sha512 00:26:20.262 16:08:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:20.262 16:08:59 -- host/auth.sh@68 -- # keyid=3 00:26:20.262 16:08:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:20.262 16:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.262 16:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:20.262 16:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.262 16:08:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:20.262 16:08:59 -- nvmf/common.sh@717 -- # local ip 00:26:20.262 16:08:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:20.262 16:08:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:20.262 16:08:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.262 16:08:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.262 16:08:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:20.262 16:08:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.262 16:08:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:20.262 16:08:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:20.262 16:08:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:20.262 16:08:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:20.262 16:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.262 16:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:20.262 nvme0n1 00:26:20.262 16:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.262 16:08:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:20.262 16:08:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.262 16:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.262 16:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:20.551 16:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.551 16:08:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.551 16:08:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.551 16:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.551 16:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:20.551 16:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.551 16:08:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:20.551 16:08:59 -- host/auth.sh@110 -- # nvmet_auth_set_key 
sha512 ffdhe2048 4 00:26:20.551 16:08:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:20.551 16:08:59 -- host/auth.sh@44 -- # digest=sha512 00:26:20.551 16:08:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.551 16:08:59 -- host/auth.sh@44 -- # keyid=4 00:26:20.551 16:08:59 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:20.551 16:08:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:20.551 16:08:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:20.551 16:08:59 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:20.551 16:08:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:26:20.551 16:08:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:20.551 16:08:59 -- host/auth.sh@68 -- # digest=sha512 00:26:20.551 16:08:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:20.551 16:08:59 -- host/auth.sh@68 -- # keyid=4 00:26:20.551 16:08:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:20.551 16:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.551 16:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:20.551 16:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.551 16:09:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:20.551 16:09:00 -- nvmf/common.sh@717 -- # local ip 00:26:20.551 16:09:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:20.551 16:09:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:20.551 16:09:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.551 16:09:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.551 16:09:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:20.551 16:09:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.551 16:09:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:20.551 16:09:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:20.551 16:09:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:20.552 16:09:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:20.552 16:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.552 16:09:00 -- common/autotest_common.sh@10 -- # set +x 00:26:20.552 nvme0n1 00:26:20.552 16:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.552 16:09:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.552 16:09:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:20.552 16:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.552 16:09:00 -- common/autotest_common.sh@10 -- # set +x 00:26:20.552 16:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.552 16:09:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.552 16:09:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.552 16:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.552 16:09:00 -- common/autotest_common.sh@10 -- # set +x 00:26:20.552 16:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.552 16:09:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:20.552 16:09:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:20.552 16:09:00 -- host/auth.sh@110 -- # nvmet_auth_set_key 
sha512 ffdhe3072 0 00:26:20.552 16:09:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:20.552 16:09:00 -- host/auth.sh@44 -- # digest=sha512 00:26:20.552 16:09:00 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:20.552 16:09:00 -- host/auth.sh@44 -- # keyid=0 00:26:20.552 16:09:00 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:20.552 16:09:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:20.552 16:09:00 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:20.552 16:09:00 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:20.552 16:09:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:26:20.552 16:09:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:20.552 16:09:00 -- host/auth.sh@68 -- # digest=sha512 00:26:20.552 16:09:00 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:20.552 16:09:00 -- host/auth.sh@68 -- # keyid=0 00:26:20.552 16:09:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:20.552 16:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.830 16:09:00 -- common/autotest_common.sh@10 -- # set +x 00:26:20.830 16:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.830 16:09:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:20.830 16:09:00 -- nvmf/common.sh@717 -- # local ip 00:26:20.830 16:09:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:20.830 16:09:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:20.830 16:09:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.830 16:09:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.830 16:09:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:20.830 16:09:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.830 16:09:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:20.830 16:09:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:20.830 16:09:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:20.830 16:09:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:20.830 16:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.830 16:09:00 -- common/autotest_common.sh@10 -- # set +x 00:26:20.830 nvme0n1 00:26:20.830 16:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.830 16:09:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:20.830 16:09:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.830 16:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.830 16:09:00 -- common/autotest_common.sh@10 -- # set +x 00:26:20.830 16:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.831 16:09:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.831 16:09:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.831 16:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.831 16:09:00 -- common/autotest_common.sh@10 -- # set +x 00:26:20.831 16:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.831 16:09:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:20.831 16:09:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:20.831 16:09:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:20.831 16:09:00 -- host/auth.sh@44 -- # digest=sha512 00:26:20.831 
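The get_main_ns_ip trace from nvmf/common.sh@717-731 repeats verbatim before every attach: it maps the transport name to the environment variable holding the initiator-facing address and echoes its value (10.0.0.1 throughout this run). A reconstruction of that helper from the markers above; the variable carrying the transport name, the indirection mechanism and the exact failure handling are not visible here and are assumed.

  # get_main_ns_ip as implied by the nvmf/common.sh@717-731 markers in this trace.
  get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
      [rdma]=NVMF_FIRST_TARGET_IP   # @720
      [tcp]=NVMF_INITIATOR_IP       # @721
    )
    # ASSUMPTION: the transport comes from a variable like $TEST_TRANSPORT;
    # the trace only shows its expanded value ("tcp").
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # @723
    ip=${ip_candidates[$TEST_TRANSPORT]}   # @724 -> NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1            # @726 -> checks that 10.0.0.1 is non-empty
    echo "${!ip}"                          # @731 -> 10.0.0.1
  }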
16:09:00 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:20.831 16:09:00 -- host/auth.sh@44 -- # keyid=1 00:26:20.831 16:09:00 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:20.831 16:09:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:20.831 16:09:00 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:20.831 16:09:00 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:20.831 16:09:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:26:20.831 16:09:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:20.831 16:09:00 -- host/auth.sh@68 -- # digest=sha512 00:26:20.831 16:09:00 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:20.831 16:09:00 -- host/auth.sh@68 -- # keyid=1 00:26:20.831 16:09:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:20.831 16:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.831 16:09:00 -- common/autotest_common.sh@10 -- # set +x 00:26:20.831 16:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.831 16:09:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:20.831 16:09:00 -- nvmf/common.sh@717 -- # local ip 00:26:20.831 16:09:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:20.831 16:09:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:20.831 16:09:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.831 16:09:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.831 16:09:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:20.831 16:09:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.831 16:09:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:20.831 16:09:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:20.831 16:09:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:20.831 16:09:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:20.831 16:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.831 16:09:00 -- common/autotest_common.sh@10 -- # set +x 00:26:21.100 nvme0n1 00:26:21.100 16:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.100 16:09:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.100 16:09:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:21.100 16:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.100 16:09:00 -- common/autotest_common.sh@10 -- # set +x 00:26:21.100 16:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.100 16:09:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.100 16:09:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.100 16:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.100 16:09:00 -- common/autotest_common.sh@10 -- # set +x 00:26:21.100 16:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.100 16:09:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:21.100 16:09:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:21.100 16:09:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:21.100 16:09:00 -- host/auth.sh@44 -- # digest=sha512 00:26:21.100 16:09:00 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:21.100 16:09:00 -- host/auth.sh@44 -- # keyid=2 00:26:21.100 
16:09:00 -- host/auth.sh@45 -- # key=DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:21.100 16:09:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:21.100 16:09:00 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:21.100 16:09:00 -- host/auth.sh@49 -- # echo DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:21.100 16:09:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:26:21.100 16:09:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:21.100 16:09:00 -- host/auth.sh@68 -- # digest=sha512 00:26:21.100 16:09:00 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:21.100 16:09:00 -- host/auth.sh@68 -- # keyid=2 00:26:21.100 16:09:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:21.100 16:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.100 16:09:00 -- common/autotest_common.sh@10 -- # set +x 00:26:21.100 16:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.100 16:09:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:21.100 16:09:00 -- nvmf/common.sh@717 -- # local ip 00:26:21.100 16:09:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:21.100 16:09:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:21.100 16:09:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.100 16:09:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.100 16:09:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:21.100 16:09:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.100 16:09:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:21.100 16:09:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:21.100 16:09:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:21.100 16:09:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:21.100 16:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.100 16:09:00 -- common/autotest_common.sh@10 -- # set +x 00:26:21.359 nvme0n1 00:26:21.359 16:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.359 16:09:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.359 16:09:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:21.359 16:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.359 16:09:00 -- common/autotest_common.sh@10 -- # set +x 00:26:21.359 16:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.359 16:09:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.359 16:09:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.359 16:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.359 16:09:00 -- common/autotest_common.sh@10 -- # set +x 00:26:21.359 16:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.359 16:09:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:21.359 16:09:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:21.359 16:09:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:21.359 16:09:00 -- host/auth.sh@44 -- # digest=sha512 00:26:21.359 16:09:00 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:21.359 16:09:00 -- host/auth.sh@44 -- # keyid=3 00:26:21.359 16:09:00 -- host/auth.sh@45 -- # key=DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:21.359 16:09:00 -- host/auth.sh@47 -- # 
echo 'hmac(sha512)' 00:26:21.359 16:09:00 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:21.359 16:09:00 -- host/auth.sh@49 -- # echo DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:21.359 16:09:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:26:21.359 16:09:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:21.359 16:09:00 -- host/auth.sh@68 -- # digest=sha512 00:26:21.359 16:09:00 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:21.359 16:09:00 -- host/auth.sh@68 -- # keyid=3 00:26:21.359 16:09:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:21.359 16:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.359 16:09:00 -- common/autotest_common.sh@10 -- # set +x 00:26:21.359 16:09:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.359 16:09:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:21.359 16:09:00 -- nvmf/common.sh@717 -- # local ip 00:26:21.359 16:09:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:21.359 16:09:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:21.359 16:09:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.359 16:09:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.359 16:09:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:21.359 16:09:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.359 16:09:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:21.359 16:09:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:21.359 16:09:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:21.359 16:09:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:21.359 16:09:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.359 16:09:00 -- common/autotest_common.sh@10 -- # set +x 00:26:21.618 nvme0n1 00:26:21.618 16:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.618 16:09:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.618 16:09:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:21.618 16:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.618 16:09:01 -- common/autotest_common.sh@10 -- # set +x 00:26:21.618 16:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.618 16:09:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.618 16:09:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.618 16:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.618 16:09:01 -- common/autotest_common.sh@10 -- # set +x 00:26:21.618 16:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.618 16:09:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:21.618 16:09:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:21.618 16:09:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:21.618 16:09:01 -- host/auth.sh@44 -- # digest=sha512 00:26:21.618 16:09:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:21.618 16:09:01 -- host/auth.sh@44 -- # keyid=4 00:26:21.618 16:09:01 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:21.618 16:09:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:21.618 16:09:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:21.618 
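The whole passage is driven by three nested loops, visible as the host/auth.sh@107-@111 markers that separate the blocks: digests on the outside, DH groups in the middle, and one iteration per configured secret on the inside, each iteration provisioning the target and then authenticating from the initiator. Reconstructed shape (only values that actually appear in this trace are listed in the comments):

  # Loop structure implied by host/auth.sh@107-111.
  for digest in "${digests[@]}"; do           # sha384 and sha512 appear in this excerpt
    for dhgroup in "${dhgroups[@]}"; do       # ffdhe2048 ... ffdhe8192
      for keyid in "${!keys[@]}"; do          # indexes 0-4, one DHHC-1 secret each
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @110: target side
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # @111: initiator side
      done
    done
  done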
16:09:01 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:21.618 16:09:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:26:21.618 16:09:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:21.618 16:09:01 -- host/auth.sh@68 -- # digest=sha512 00:26:21.618 16:09:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:21.618 16:09:01 -- host/auth.sh@68 -- # keyid=4 00:26:21.618 16:09:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:21.618 16:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.618 16:09:01 -- common/autotest_common.sh@10 -- # set +x 00:26:21.618 16:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.618 16:09:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:21.618 16:09:01 -- nvmf/common.sh@717 -- # local ip 00:26:21.618 16:09:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:21.618 16:09:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:21.618 16:09:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.618 16:09:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.618 16:09:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:21.618 16:09:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.619 16:09:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:21.619 16:09:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:21.619 16:09:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:21.619 16:09:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:21.619 16:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.619 16:09:01 -- common/autotest_common.sh@10 -- # set +x 00:26:21.878 nvme0n1 00:26:21.878 16:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.878 16:09:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.878 16:09:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:21.878 16:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.878 16:09:01 -- common/autotest_common.sh@10 -- # set +x 00:26:21.878 16:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.878 16:09:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.878 16:09:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.878 16:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.878 16:09:01 -- common/autotest_common.sh@10 -- # set +x 00:26:21.878 16:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.878 16:09:01 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:21.878 16:09:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:21.878 16:09:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:21.878 16:09:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:21.878 16:09:01 -- host/auth.sh@44 -- # digest=sha512 00:26:21.878 16:09:01 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:21.878 16:09:01 -- host/auth.sh@44 -- # keyid=0 00:26:21.878 16:09:01 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:21.878 16:09:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:21.878 16:09:01 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:21.878 16:09:01 -- host/auth.sh@49 -- # echo 
DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:21.878 16:09:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:26:21.878 16:09:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:21.878 16:09:01 -- host/auth.sh@68 -- # digest=sha512 00:26:21.878 16:09:01 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:21.878 16:09:01 -- host/auth.sh@68 -- # keyid=0 00:26:21.878 16:09:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:21.878 16:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.878 16:09:01 -- common/autotest_common.sh@10 -- # set +x 00:26:21.878 16:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:21.878 16:09:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:21.878 16:09:01 -- nvmf/common.sh@717 -- # local ip 00:26:21.878 16:09:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:21.878 16:09:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:21.878 16:09:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.878 16:09:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.878 16:09:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:21.878 16:09:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.878 16:09:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:21.878 16:09:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:21.878 16:09:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:21.878 16:09:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:21.878 16:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.878 16:09:01 -- common/autotest_common.sh@10 -- # set +x 00:26:22.137 nvme0n1 00:26:22.137 16:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.137 16:09:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.137 16:09:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:22.137 16:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.137 16:09:01 -- common/autotest_common.sh@10 -- # set +x 00:26:22.137 16:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.137 16:09:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.137 16:09:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.137 16:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.137 16:09:01 -- common/autotest_common.sh@10 -- # set +x 00:26:22.137 16:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.137 16:09:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:22.137 16:09:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:22.137 16:09:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:22.137 16:09:01 -- host/auth.sh@44 -- # digest=sha512 00:26:22.137 16:09:01 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:22.137 16:09:01 -- host/auth.sh@44 -- # keyid=1 00:26:22.137 16:09:01 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:22.137 16:09:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:22.137 16:09:01 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:22.137 16:09:01 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:22.137 16:09:01 -- host/auth.sh@111 -- # 
connect_authenticate sha512 ffdhe4096 1 00:26:22.137 16:09:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:22.137 16:09:01 -- host/auth.sh@68 -- # digest=sha512 00:26:22.137 16:09:01 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:22.137 16:09:01 -- host/auth.sh@68 -- # keyid=1 00:26:22.137 16:09:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:22.137 16:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.137 16:09:01 -- common/autotest_common.sh@10 -- # set +x 00:26:22.137 16:09:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.396 16:09:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:22.396 16:09:01 -- nvmf/common.sh@717 -- # local ip 00:26:22.396 16:09:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:22.396 16:09:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:22.396 16:09:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.396 16:09:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.396 16:09:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:22.396 16:09:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.396 16:09:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:22.396 16:09:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:22.396 16:09:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:22.396 16:09:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:22.396 16:09:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.396 16:09:01 -- common/autotest_common.sh@10 -- # set +x 00:26:22.396 nvme0n1 00:26:22.396 16:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.396 16:09:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.396 16:09:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:22.396 16:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.396 16:09:02 -- common/autotest_common.sh@10 -- # set +x 00:26:22.655 16:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.655 16:09:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.655 16:09:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.655 16:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.655 16:09:02 -- common/autotest_common.sh@10 -- # set +x 00:26:22.655 16:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.655 16:09:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:22.655 16:09:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:22.655 16:09:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:22.655 16:09:02 -- host/auth.sh@44 -- # digest=sha512 00:26:22.655 16:09:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:22.655 16:09:02 -- host/auth.sh@44 -- # keyid=2 00:26:22.655 16:09:02 -- host/auth.sh@45 -- # key=DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:22.655 16:09:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:22.655 16:09:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:22.655 16:09:02 -- host/auth.sh@49 -- # echo DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:22.655 16:09:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:26:22.655 16:09:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:22.655 16:09:02 -- host/auth.sh@68 -- # 
digest=sha512 00:26:22.655 16:09:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:22.655 16:09:02 -- host/auth.sh@68 -- # keyid=2 00:26:22.655 16:09:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:22.655 16:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.655 16:09:02 -- common/autotest_common.sh@10 -- # set +x 00:26:22.655 16:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.655 16:09:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:22.655 16:09:02 -- nvmf/common.sh@717 -- # local ip 00:26:22.655 16:09:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:22.655 16:09:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:22.655 16:09:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.655 16:09:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.655 16:09:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:22.655 16:09:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.655 16:09:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:22.655 16:09:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:22.655 16:09:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:22.655 16:09:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:22.655 16:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.655 16:09:02 -- common/autotest_common.sh@10 -- # set +x 00:26:22.914 nvme0n1 00:26:22.914 16:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.914 16:09:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.914 16:09:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:22.914 16:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.914 16:09:02 -- common/autotest_common.sh@10 -- # set +x 00:26:22.914 16:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.914 16:09:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.914 16:09:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.914 16:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.914 16:09:02 -- common/autotest_common.sh@10 -- # set +x 00:26:22.914 16:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.914 16:09:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:22.914 16:09:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:22.914 16:09:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:22.914 16:09:02 -- host/auth.sh@44 -- # digest=sha512 00:26:22.914 16:09:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:22.914 16:09:02 -- host/auth.sh@44 -- # keyid=3 00:26:22.914 16:09:02 -- host/auth.sh@45 -- # key=DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:22.914 16:09:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:22.914 16:09:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:22.914 16:09:02 -- host/auth.sh@49 -- # echo DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:22.914 16:09:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:26:22.914 16:09:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:22.914 16:09:02 -- host/auth.sh@68 -- # digest=sha512 00:26:22.914 16:09:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:22.914 16:09:02 -- host/auth.sh@68 
-- # keyid=3 00:26:22.914 16:09:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:22.914 16:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.914 16:09:02 -- common/autotest_common.sh@10 -- # set +x 00:26:22.914 16:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.914 16:09:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:22.914 16:09:02 -- nvmf/common.sh@717 -- # local ip 00:26:22.914 16:09:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:22.914 16:09:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:22.915 16:09:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.915 16:09:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.915 16:09:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:22.915 16:09:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.915 16:09:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:22.915 16:09:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:22.915 16:09:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:22.915 16:09:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:22.915 16:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.915 16:09:02 -- common/autotest_common.sh@10 -- # set +x 00:26:23.174 nvme0n1 00:26:23.174 16:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.174 16:09:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.174 16:09:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:23.174 16:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.174 16:09:02 -- common/autotest_common.sh@10 -- # set +x 00:26:23.174 16:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.174 16:09:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.174 16:09:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.174 16:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.174 16:09:02 -- common/autotest_common.sh@10 -- # set +x 00:26:23.174 16:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.174 16:09:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:23.174 16:09:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:23.174 16:09:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:23.174 16:09:02 -- host/auth.sh@44 -- # digest=sha512 00:26:23.174 16:09:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:23.174 16:09:02 -- host/auth.sh@44 -- # keyid=4 00:26:23.174 16:09:02 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:23.174 16:09:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:23.174 16:09:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:23.174 16:09:02 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:23.174 16:09:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:26:23.174 16:09:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:23.174 16:09:02 -- host/auth.sh@68 -- # digest=sha512 00:26:23.174 16:09:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:23.174 16:09:02 -- host/auth.sh@68 -- # keyid=4 00:26:23.174 16:09:02 -- host/auth.sh@69 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:23.174 16:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.174 16:09:02 -- common/autotest_common.sh@10 -- # set +x 00:26:23.174 16:09:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.174 16:09:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:23.174 16:09:02 -- nvmf/common.sh@717 -- # local ip 00:26:23.174 16:09:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:23.174 16:09:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:23.174 16:09:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.174 16:09:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.174 16:09:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:23.174 16:09:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.174 16:09:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:23.174 16:09:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:23.174 16:09:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:23.174 16:09:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:23.174 16:09:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.174 16:09:02 -- common/autotest_common.sh@10 -- # set +x 00:26:23.433 nvme0n1 00:26:23.433 16:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.433 16:09:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:23.433 16:09:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.433 16:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.433 16:09:03 -- common/autotest_common.sh@10 -- # set +x 00:26:23.433 16:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.433 16:09:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.433 16:09:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.433 16:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.433 16:09:03 -- common/autotest_common.sh@10 -- # set +x 00:26:23.433 16:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.433 16:09:03 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:23.433 16:09:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:23.433 16:09:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:23.433 16:09:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:23.433 16:09:03 -- host/auth.sh@44 -- # digest=sha512 00:26:23.433 16:09:03 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:23.433 16:09:03 -- host/auth.sh@44 -- # keyid=0 00:26:23.433 16:09:03 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:23.433 16:09:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:23.433 16:09:03 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:23.433 16:09:03 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:23.433 16:09:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:26:23.433 16:09:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:23.433 16:09:03 -- host/auth.sh@68 -- # digest=sha512 00:26:23.433 16:09:03 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:23.433 16:09:03 -- host/auth.sh@68 -- # keyid=0 00:26:23.433 16:09:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:23.433 
16:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.434 16:09:03 -- common/autotest_common.sh@10 -- # set +x 00:26:23.434 16:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.434 16:09:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:23.434 16:09:03 -- nvmf/common.sh@717 -- # local ip 00:26:23.434 16:09:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:23.434 16:09:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:23.434 16:09:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.434 16:09:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.434 16:09:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:23.434 16:09:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.434 16:09:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:23.434 16:09:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:23.434 16:09:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:23.434 16:09:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:23.434 16:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.434 16:09:03 -- common/autotest_common.sh@10 -- # set +x 00:26:24.002 nvme0n1 00:26:24.002 16:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.002 16:09:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.002 16:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.002 16:09:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:24.002 16:09:03 -- common/autotest_common.sh@10 -- # set +x 00:26:24.002 16:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.002 16:09:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.002 16:09:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.002 16:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.002 16:09:03 -- common/autotest_common.sh@10 -- # set +x 00:26:24.002 16:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.002 16:09:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:24.002 16:09:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:24.002 16:09:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:24.002 16:09:03 -- host/auth.sh@44 -- # digest=sha512 00:26:24.002 16:09:03 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:24.002 16:09:03 -- host/auth.sh@44 -- # keyid=1 00:26:24.002 16:09:03 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:24.002 16:09:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:24.002 16:09:03 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:24.002 16:09:03 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:24.002 16:09:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:26:24.002 16:09:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:24.002 16:09:03 -- host/auth.sh@68 -- # digest=sha512 00:26:24.002 16:09:03 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:24.002 16:09:03 -- host/auth.sh@68 -- # keyid=1 00:26:24.002 16:09:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:24.002 16:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.002 16:09:03 -- common/autotest_common.sh@10 -- # 
set +x 00:26:24.002 16:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.002 16:09:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:24.002 16:09:03 -- nvmf/common.sh@717 -- # local ip 00:26:24.002 16:09:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:24.002 16:09:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:24.002 16:09:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.002 16:09:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.002 16:09:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:24.002 16:09:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.002 16:09:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:24.002 16:09:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:24.002 16:09:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:24.002 16:09:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:24.002 16:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.002 16:09:03 -- common/autotest_common.sh@10 -- # set +x 00:26:24.261 nvme0n1 00:26:24.261 16:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.520 16:09:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:24.520 16:09:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.520 16:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.520 16:09:03 -- common/autotest_common.sh@10 -- # set +x 00:26:24.520 16:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.520 16:09:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.520 16:09:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.520 16:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.520 16:09:03 -- common/autotest_common.sh@10 -- # set +x 00:26:24.520 16:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.520 16:09:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:24.520 16:09:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:24.520 16:09:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:24.520 16:09:03 -- host/auth.sh@44 -- # digest=sha512 00:26:24.520 16:09:03 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:24.520 16:09:03 -- host/auth.sh@44 -- # keyid=2 00:26:24.520 16:09:03 -- host/auth.sh@45 -- # key=DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:24.520 16:09:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:24.520 16:09:03 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:24.520 16:09:03 -- host/auth.sh@49 -- # echo DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:24.520 16:09:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:26:24.520 16:09:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:24.520 16:09:03 -- host/auth.sh@68 -- # digest=sha512 00:26:24.520 16:09:03 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:24.520 16:09:03 -- host/auth.sh@68 -- # keyid=2 00:26:24.520 16:09:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:24.520 16:09:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.520 16:09:03 -- common/autotest_common.sh@10 -- # set +x 00:26:24.520 16:09:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.520 16:09:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:24.520 16:09:04 -- 
nvmf/common.sh@717 -- # local ip 00:26:24.520 16:09:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:24.520 16:09:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:24.520 16:09:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.520 16:09:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.520 16:09:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:24.520 16:09:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.520 16:09:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:24.520 16:09:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:24.520 16:09:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:24.520 16:09:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:24.520 16:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.520 16:09:04 -- common/autotest_common.sh@10 -- # set +x 00:26:24.779 nvme0n1 00:26:24.779 16:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.779 16:09:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.779 16:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.779 16:09:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:24.779 16:09:04 -- common/autotest_common.sh@10 -- # set +x 00:26:24.779 16:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.779 16:09:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.779 16:09:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.779 16:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.779 16:09:04 -- common/autotest_common.sh@10 -- # set +x 00:26:24.779 16:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.779 16:09:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:24.779 16:09:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:24.779 16:09:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:24.779 16:09:04 -- host/auth.sh@44 -- # digest=sha512 00:26:24.779 16:09:04 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:24.779 16:09:04 -- host/auth.sh@44 -- # keyid=3 00:26:24.779 16:09:04 -- host/auth.sh@45 -- # key=DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:24.779 16:09:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:24.779 16:09:04 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:24.779 16:09:04 -- host/auth.sh@49 -- # echo DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:24.779 16:09:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:26:24.779 16:09:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:24.779 16:09:04 -- host/auth.sh@68 -- # digest=sha512 00:26:24.779 16:09:04 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:24.779 16:09:04 -- host/auth.sh@68 -- # keyid=3 00:26:24.779 16:09:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:24.779 16:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.779 16:09:04 -- common/autotest_common.sh@10 -- # set +x 00:26:24.779 16:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.779 16:09:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:25.037 16:09:04 -- nvmf/common.sh@717 -- # local ip 00:26:25.037 16:09:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:25.037 16:09:04 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:25.037 16:09:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.037 16:09:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.037 16:09:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:25.037 16:09:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.037 16:09:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:25.037 16:09:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:25.037 16:09:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:25.037 16:09:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:25.037 16:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.037 16:09:04 -- common/autotest_common.sh@10 -- # set +x 00:26:25.296 nvme0n1 00:26:25.296 16:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.296 16:09:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.296 16:09:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:25.296 16:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.296 16:09:04 -- common/autotest_common.sh@10 -- # set +x 00:26:25.296 16:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.296 16:09:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.296 16:09:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.296 16:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.296 16:09:04 -- common/autotest_common.sh@10 -- # set +x 00:26:25.296 16:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.296 16:09:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:25.296 16:09:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:25.296 16:09:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:25.296 16:09:04 -- host/auth.sh@44 -- # digest=sha512 00:26:25.296 16:09:04 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:25.296 16:09:04 -- host/auth.sh@44 -- # keyid=4 00:26:25.296 16:09:04 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:25.296 16:09:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:25.296 16:09:04 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:25.296 16:09:04 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:25.296 16:09:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:26:25.296 16:09:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:25.296 16:09:04 -- host/auth.sh@68 -- # digest=sha512 00:26:25.296 16:09:04 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:25.296 16:09:04 -- host/auth.sh@68 -- # keyid=4 00:26:25.296 16:09:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:25.296 16:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.296 16:09:04 -- common/autotest_common.sh@10 -- # set +x 00:26:25.296 16:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.296 16:09:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:25.296 16:09:04 -- nvmf/common.sh@717 -- # local ip 00:26:25.296 16:09:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:25.296 16:09:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:25.296 16:09:04 -- 
nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.296 16:09:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.296 16:09:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:25.296 16:09:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.296 16:09:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:25.296 16:09:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:25.296 16:09:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:25.296 16:09:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:25.296 16:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.296 16:09:04 -- common/autotest_common.sh@10 -- # set +x 00:26:25.863 nvme0n1 00:26:25.863 16:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.863 16:09:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.863 16:09:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:25.863 16:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.863 16:09:05 -- common/autotest_common.sh@10 -- # set +x 00:26:25.863 16:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.863 16:09:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.863 16:09:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.863 16:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.863 16:09:05 -- common/autotest_common.sh@10 -- # set +x 00:26:25.863 16:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.863 16:09:05 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:25.863 16:09:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:25.863 16:09:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:25.863 16:09:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:25.863 16:09:05 -- host/auth.sh@44 -- # digest=sha512 00:26:25.863 16:09:05 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:25.863 16:09:05 -- host/auth.sh@44 -- # keyid=0 00:26:25.863 16:09:05 -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:25.863 16:09:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:25.863 16:09:05 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:25.863 16:09:05 -- host/auth.sh@49 -- # echo DHHC-1:00:ZDc2MTQ5YWQ1N2E5ZjA1MmExZGZjYzQxZWM1MmJlODLvJ/yF: 00:26:25.863 16:09:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:26:25.863 16:09:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:25.863 16:09:05 -- host/auth.sh@68 -- # digest=sha512 00:26:25.863 16:09:05 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:25.863 16:09:05 -- host/auth.sh@68 -- # keyid=0 00:26:25.863 16:09:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:25.863 16:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.863 16:09:05 -- common/autotest_common.sh@10 -- # set +x 00:26:25.863 16:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.863 16:09:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:25.863 16:09:05 -- nvmf/common.sh@717 -- # local ip 00:26:25.863 16:09:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:25.863 16:09:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:25.863 16:09:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.863 16:09:05 
-- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.863 16:09:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:25.864 16:09:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.864 16:09:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:25.864 16:09:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:25.864 16:09:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:25.864 16:09:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:25.864 16:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.864 16:09:05 -- common/autotest_common.sh@10 -- # set +x 00:26:26.431 nvme0n1 00:26:26.431 16:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.431 16:09:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.431 16:09:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:26.431 16:09:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.431 16:09:05 -- common/autotest_common.sh@10 -- # set +x 00:26:26.431 16:09:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.431 16:09:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.431 16:09:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.431 16:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.431 16:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:26.431 16:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.431 16:09:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:26.431 16:09:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:26.431 16:09:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:26.431 16:09:06 -- host/auth.sh@44 -- # digest=sha512 00:26:26.431 16:09:06 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:26.431 16:09:06 -- host/auth.sh@44 -- # keyid=1 00:26:26.432 16:09:06 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:26.432 16:09:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:26.432 16:09:06 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:26.432 16:09:06 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:26.432 16:09:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:26:26.432 16:09:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:26.432 16:09:06 -- host/auth.sh@68 -- # digest=sha512 00:26:26.432 16:09:06 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:26.432 16:09:06 -- host/auth.sh@68 -- # keyid=1 00:26:26.432 16:09:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:26.432 16:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.432 16:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:26.432 16:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.432 16:09:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:26.432 16:09:06 -- nvmf/common.sh@717 -- # local ip 00:26:26.432 16:09:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:26.432 16:09:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:26.432 16:09:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.432 16:09:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.432 16:09:06 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:26:26.432 16:09:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.432 16:09:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:26.432 16:09:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:26.432 16:09:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:26.432 16:09:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:26.432 16:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.432 16:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:26.999 nvme0n1 00:26:26.999 16:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.999 16:09:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.999 16:09:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:26.999 16:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.999 16:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:26.999 16:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.999 16:09:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.999 16:09:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.999 16:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.999 16:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:26.999 16:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.999 16:09:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:26.999 16:09:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:26.999 16:09:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:26.999 16:09:06 -- host/auth.sh@44 -- # digest=sha512 00:26:26.999 16:09:06 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:26.999 16:09:06 -- host/auth.sh@44 -- # keyid=2 00:26:26.999 16:09:06 -- host/auth.sh@45 -- # key=DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:26.999 16:09:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:26.999 16:09:06 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:26.999 16:09:06 -- host/auth.sh@49 -- # echo DHHC-1:01:NTVkZTVjYzhjYjBkMWIxMzdhMjNkOTU5MTdjMzVkOTUwmEqn: 00:26:26.999 16:09:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:26:27.000 16:09:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:27.000 16:09:06 -- host/auth.sh@68 -- # digest=sha512 00:26:27.000 16:09:06 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:27.000 16:09:06 -- host/auth.sh@68 -- # keyid=2 00:26:27.000 16:09:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:27.000 16:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.000 16:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:27.259 16:09:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.259 16:09:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:27.259 16:09:06 -- nvmf/common.sh@717 -- # local ip 00:26:27.259 16:09:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:27.259 16:09:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:27.259 16:09:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.259 16:09:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.259 16:09:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:27.259 16:09:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.259 16:09:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:27.259 
16:09:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:27.259 16:09:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:27.259 16:09:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:27.259 16:09:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.259 16:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:27.827 nvme0n1 00:26:27.827 16:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.827 16:09:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.827 16:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.827 16:09:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:27.827 16:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:27.827 16:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.827 16:09:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.827 16:09:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.827 16:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.827 16:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:27.827 16:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.827 16:09:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:27.827 16:09:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:27.827 16:09:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:27.827 16:09:07 -- host/auth.sh@44 -- # digest=sha512 00:26:27.827 16:09:07 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:27.827 16:09:07 -- host/auth.sh@44 -- # keyid=3 00:26:27.827 16:09:07 -- host/auth.sh@45 -- # key=DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:27.827 16:09:07 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:27.827 16:09:07 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:27.827 16:09:07 -- host/auth.sh@49 -- # echo DHHC-1:02:MjZiZjdmNzI2OWExNjI5ZGQ4MzkxNzg4Njc1NjM3ZTg0ZGMxMGYyM2E0YTk2NTdhMhXH+A==: 00:26:27.827 16:09:07 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:26:27.827 16:09:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:27.827 16:09:07 -- host/auth.sh@68 -- # digest=sha512 00:26:27.827 16:09:07 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:27.827 16:09:07 -- host/auth.sh@68 -- # keyid=3 00:26:27.827 16:09:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:27.827 16:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.827 16:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:27.827 16:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.827 16:09:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:27.827 16:09:07 -- nvmf/common.sh@717 -- # local ip 00:26:27.827 16:09:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:27.827 16:09:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:27.827 16:09:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.827 16:09:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.827 16:09:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:27.827 16:09:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.827 16:09:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:27.827 16:09:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:27.827 16:09:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
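The trace above repeats the same connect_authenticate pattern once per digest/dhgroup/key combination. A condensed sketch of one positive-path iteration, using only the RPC calls already visible in this run (rpc_cmd is the suite's JSON-RPC wrapper; the 10.0.0.1 address and the NQNs are the values resolved earlier in the log):

    # one iteration of connect_authenticate <digest> <dhgroup> <keyid>
    digest=sha512 dhgroup=ffdhe8192 keyid=2
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
    # the attach only succeeds if DH-HMAC-CHAP completed; verify, then tear down
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
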
00:26:27.827 16:09:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:27.827 16:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.827 16:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:28.396 nvme0n1 00:26:28.396 16:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.396 16:09:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.396 16:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.396 16:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:28.396 16:09:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:28.396 16:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.396 16:09:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.396 16:09:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.396 16:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.396 16:09:07 -- common/autotest_common.sh@10 -- # set +x 00:26:28.396 16:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.396 16:09:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:28.396 16:09:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:28.396 16:09:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:28.396 16:09:07 -- host/auth.sh@44 -- # digest=sha512 00:26:28.396 16:09:07 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:28.396 16:09:08 -- host/auth.sh@44 -- # keyid=4 00:26:28.396 16:09:08 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:28.396 16:09:08 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:28.396 16:09:08 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:28.396 16:09:08 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI2YzIzNzJjZDZlNjk3NGQ1OGJkNzFmNTRiZjQ1ZjI0NzAzNWRlYjAyMjUyYWRjYzJjNzZmNTFhZTVkNDkyYX9AZH0=: 00:26:28.396 16:09:08 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:26:28.396 16:09:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:28.396 16:09:08 -- host/auth.sh@68 -- # digest=sha512 00:26:28.396 16:09:08 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:28.396 16:09:08 -- host/auth.sh@68 -- # keyid=4 00:26:28.396 16:09:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:28.396 16:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.396 16:09:08 -- common/autotest_common.sh@10 -- # set +x 00:26:28.396 16:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.396 16:09:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:28.396 16:09:08 -- nvmf/common.sh@717 -- # local ip 00:26:28.396 16:09:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:28.396 16:09:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:28.396 16:09:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.396 16:09:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.396 16:09:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:28.396 16:09:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.396 16:09:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:28.396 16:09:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:28.396 16:09:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:28.396 16:09:08 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:28.396 16:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.396 16:09:08 -- common/autotest_common.sh@10 -- # set +x 00:26:28.964 nvme0n1 00:26:28.964 16:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.964 16:09:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.964 16:09:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:28.964 16:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.964 16:09:08 -- common/autotest_common.sh@10 -- # set +x 00:26:28.964 16:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.964 16:09:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.964 16:09:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.964 16:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.964 16:09:08 -- common/autotest_common.sh@10 -- # set +x 00:26:29.224 16:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:29.224 16:09:08 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:29.224 16:09:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:29.224 16:09:08 -- host/auth.sh@44 -- # digest=sha256 00:26:29.224 16:09:08 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:29.224 16:09:08 -- host/auth.sh@44 -- # keyid=1 00:26:29.224 16:09:08 -- host/auth.sh@45 -- # key=DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:29.224 16:09:08 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:29.224 16:09:08 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:29.224 16:09:08 -- host/auth.sh@49 -- # echo DHHC-1:00:ZmM1NDMwNzNiZjI3NTU2NTk4NTRkMTFlNjYyYTdmMTNmMzc1OTgyM2YxZGZkZDU3Cfbqkw==: 00:26:29.224 16:09:08 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:29.224 16:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:29.224 16:09:08 -- common/autotest_common.sh@10 -- # set +x 00:26:29.224 16:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:29.224 16:09:08 -- host/auth.sh@119 -- # get_main_ns_ip 00:26:29.224 16:09:08 -- nvmf/common.sh@717 -- # local ip 00:26:29.224 16:09:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:29.224 16:09:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:29.224 16:09:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.224 16:09:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.224 16:09:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:29.224 16:09:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.224 16:09:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:29.224 16:09:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:29.224 16:09:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:29.224 16:09:08 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:29.224 16:09:08 -- common/autotest_common.sh@638 -- # local es=0 00:26:29.224 16:09:08 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:29.224 16:09:08 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:29.224 16:09:08 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:29.224 16:09:08 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:29.224 16:09:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:29.224 16:09:08 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:29.224 16:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:29.224 16:09:08 -- common/autotest_common.sh@10 -- # set +x 00:26:29.224 request: 00:26:29.224 { 00:26:29.224 "name": "nvme0", 00:26:29.224 "trtype": "tcp", 00:26:29.224 "traddr": "10.0.0.1", 00:26:29.224 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:29.224 "adrfam": "ipv4", 00:26:29.224 "trsvcid": "4420", 00:26:29.224 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:29.224 "method": "bdev_nvme_attach_controller", 00:26:29.224 "req_id": 1 00:26:29.224 } 00:26:29.224 Got JSON-RPC error response 00:26:29.224 response: 00:26:29.224 { 00:26:29.224 "code": -32602, 00:26:29.224 "message": "Invalid parameters" 00:26:29.224 } 00:26:29.224 16:09:08 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:29.224 16:09:08 -- common/autotest_common.sh@641 -- # es=1 00:26:29.224 16:09:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:29.224 16:09:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:29.224 16:09:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:29.224 16:09:08 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.224 16:09:08 -- host/auth.sh@121 -- # jq length 00:26:29.224 16:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:29.224 16:09:08 -- common/autotest_common.sh@10 -- # set +x 00:26:29.224 16:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:29.224 16:09:08 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:26:29.224 16:09:08 -- host/auth.sh@124 -- # get_main_ns_ip 00:26:29.224 16:09:08 -- nvmf/common.sh@717 -- # local ip 00:26:29.224 16:09:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:29.224 16:09:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:29.224 16:09:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.224 16:09:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.224 16:09:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:29.224 16:09:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.224 16:09:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:29.224 16:09:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:29.224 16:09:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:29.224 16:09:08 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:29.224 16:09:08 -- common/autotest_common.sh@638 -- # local es=0 00:26:29.224 16:09:08 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:29.224 16:09:08 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:29.224 16:09:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:29.224 16:09:08 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:29.224 16:09:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:29.224 16:09:08 -- 
common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:29.224 16:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:29.224 16:09:08 -- common/autotest_common.sh@10 -- # set +x 00:26:29.224 request: 00:26:29.224 { 00:26:29.224 "name": "nvme0", 00:26:29.224 "trtype": "tcp", 00:26:29.224 "traddr": "10.0.0.1", 00:26:29.224 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:29.224 "adrfam": "ipv4", 00:26:29.224 "trsvcid": "4420", 00:26:29.224 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:29.224 "dhchap_key": "key2", 00:26:29.224 "method": "bdev_nvme_attach_controller", 00:26:29.224 "req_id": 1 00:26:29.224 } 00:26:29.224 Got JSON-RPC error response 00:26:29.224 response: 00:26:29.224 { 00:26:29.224 "code": -32602, 00:26:29.224 "message": "Invalid parameters" 00:26:29.224 } 00:26:29.224 16:09:08 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:29.224 16:09:08 -- common/autotest_common.sh@641 -- # es=1 00:26:29.224 16:09:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:29.224 16:09:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:29.224 16:09:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:29.224 16:09:08 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.224 16:09:08 -- host/auth.sh@127 -- # jq length 00:26:29.224 16:09:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:29.224 16:09:08 -- common/autotest_common.sh@10 -- # set +x 00:26:29.224 16:09:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:29.224 16:09:08 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:26:29.224 16:09:08 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:26:29.225 16:09:08 -- host/auth.sh@130 -- # cleanup 00:26:29.225 16:09:08 -- host/auth.sh@24 -- # nvmftestfini 00:26:29.225 16:09:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:29.225 16:09:08 -- nvmf/common.sh@117 -- # sync 00:26:29.484 16:09:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:29.484 16:09:08 -- nvmf/common.sh@120 -- # set +e 00:26:29.484 16:09:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:29.484 16:09:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:29.484 rmmod nvme_tcp 00:26:29.484 rmmod nvme_fabrics 00:26:29.484 16:09:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:29.484 16:09:08 -- nvmf/common.sh@124 -- # set -e 00:26:29.484 16:09:08 -- nvmf/common.sh@125 -- # return 0 00:26:29.484 16:09:08 -- nvmf/common.sh@478 -- # '[' -n 2573742 ']' 00:26:29.484 16:09:08 -- nvmf/common.sh@479 -- # killprocess 2573742 00:26:29.484 16:09:08 -- common/autotest_common.sh@936 -- # '[' -z 2573742 ']' 00:26:29.484 16:09:08 -- common/autotest_common.sh@940 -- # kill -0 2573742 00:26:29.484 16:09:08 -- common/autotest_common.sh@941 -- # uname 00:26:29.484 16:09:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:29.484 16:09:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2573742 00:26:29.484 16:09:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:29.484 16:09:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:29.484 16:09:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2573742' 00:26:29.484 killing process with pid 2573742 00:26:29.484 16:09:09 -- common/autotest_common.sh@955 -- # kill 2573742 00:26:29.484 16:09:09 -- common/autotest_common.sh@960 -- # wait 2573742 00:26:30.420 16:09:10 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:30.420 16:09:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:30.420 16:09:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:30.420 16:09:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:30.420 16:09:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:30.420 16:09:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.420 16:09:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:30.420 16:09:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.956 16:09:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:32.956 16:09:12 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:32.956 16:09:12 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:32.956 16:09:12 -- host/auth.sh@27 -- # clean_kernel_target 00:26:32.956 16:09:12 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:32.956 16:09:12 -- nvmf/common.sh@675 -- # echo 0 00:26:32.956 16:09:12 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:32.956 16:09:12 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:32.956 16:09:12 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:32.956 16:09:12 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:32.956 16:09:12 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:26:32.956 16:09:12 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:26:32.956 16:09:12 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:35.489 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:35.489 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:35.489 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:35.489 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:35.489 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:35.489 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:35.489 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:35.489 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:35.489 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:35.489 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:35.489 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:35.489 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:35.489 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:35.489 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:35.489 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:35.489 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:36.057 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:36.315 16:09:15 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.ycQ /tmp/spdk.key-null.sU0 /tmp/spdk.key-sha256.j4s /tmp/spdk.key-sha384.Y73 /tmp/spdk.key-sha512.psc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:36.315 16:09:15 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:38.850 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:38.850 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:38.850 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:38.850 0000:00:04.5 (8086 2021): Already using the 
vfio-pci driver 00:26:38.850 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:38.850 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:38.850 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:38.850 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:38.850 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:38.850 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:38.850 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:38.850 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:38.850 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:38.850 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:38.850 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:38.850 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:38.850 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:38.850 00:26:38.850 real 0m49.996s 00:26:38.850 user 0m44.386s 00:26:38.850 sys 0m11.737s 00:26:38.850 16:09:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:38.850 16:09:18 -- common/autotest_common.sh@10 -- # set +x 00:26:38.850 ************************************ 00:26:38.850 END TEST nvmf_auth 00:26:38.850 ************************************ 00:26:39.109 16:09:18 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:26:39.109 16:09:18 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:39.109 16:09:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:39.109 16:09:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:39.109 16:09:18 -- common/autotest_common.sh@10 -- # set +x 00:26:39.109 ************************************ 00:26:39.109 START TEST nvmf_digest 00:26:39.109 ************************************ 00:26:39.109 16:09:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:39.109 * Looking for test storage... 
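The nvmf_auth cleanup a few lines above dismantles the kernel nvmet configuration strictly bottom-up: the host link and host entry first, then the port-to-subsystem link, the namespace, the port and subsystem directories, and only then the nvmet modules. Condensed from the trace (the redirect target of the 'echo 0' disable step is not captured by xtrace, so it is left out here):

    # kernel nvmet teardown order, as run by the auth cleanup and clean_kernel_target
    rm    /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    modprobe -r nvmet_tcp nvmet
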
00:26:39.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:39.109 16:09:18 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:39.109 16:09:18 -- nvmf/common.sh@7 -- # uname -s 00:26:39.369 16:09:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.369 16:09:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.369 16:09:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.369 16:09:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.369 16:09:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.369 16:09:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.369 16:09:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.369 16:09:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.369 16:09:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.369 16:09:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.369 16:09:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:39.369 16:09:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:39.369 16:09:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.369 16:09:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.369 16:09:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:39.369 16:09:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.369 16:09:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:39.369 16:09:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.369 16:09:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.369 16:09:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.369 16:09:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.369 16:09:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.369 16:09:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.369 16:09:18 -- paths/export.sh@5 -- # export PATH 00:26:39.369 16:09:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.369 16:09:18 -- nvmf/common.sh@47 -- # : 0 00:26:39.369 16:09:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:39.369 16:09:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:39.369 16:09:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:39.369 16:09:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.369 16:09:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.369 16:09:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:39.369 16:09:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:39.369 16:09:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:39.369 16:09:18 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:39.369 16:09:18 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:39.369 16:09:18 -- host/digest.sh@16 -- # runtime=2 00:26:39.369 16:09:18 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:39.369 16:09:18 -- host/digest.sh@138 -- # nvmftestinit 00:26:39.369 16:09:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:39.369 16:09:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:39.369 16:09:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:39.369 16:09:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:39.369 16:09:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:39.370 16:09:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.370 16:09:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:39.370 16:09:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.370 16:09:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:39.370 16:09:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:39.370 16:09:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:39.370 16:09:18 -- common/autotest_common.sh@10 -- # set +x 00:26:44.655 16:09:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:44.655 16:09:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:44.655 16:09:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:44.655 16:09:23 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:44.655 16:09:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:44.655 16:09:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:44.655 16:09:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:44.655 16:09:23 -- 
nvmf/common.sh@295 -- # net_devs=() 00:26:44.655 16:09:23 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:44.655 16:09:23 -- nvmf/common.sh@296 -- # e810=() 00:26:44.655 16:09:23 -- nvmf/common.sh@296 -- # local -ga e810 00:26:44.655 16:09:23 -- nvmf/common.sh@297 -- # x722=() 00:26:44.655 16:09:23 -- nvmf/common.sh@297 -- # local -ga x722 00:26:44.655 16:09:23 -- nvmf/common.sh@298 -- # mlx=() 00:26:44.655 16:09:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:44.655 16:09:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:44.655 16:09:23 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:44.655 16:09:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:44.655 16:09:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:44.655 16:09:23 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:44.655 16:09:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:44.655 16:09:23 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:44.655 16:09:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:44.655 16:09:23 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:44.655 16:09:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:44.655 16:09:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:44.655 16:09:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:44.655 16:09:23 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:44.655 16:09:23 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:44.655 16:09:23 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:44.655 16:09:23 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:44.655 16:09:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:44.655 16:09:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:44.655 16:09:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:44.655 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:44.655 16:09:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:44.655 16:09:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:44.655 16:09:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.655 16:09:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.655 16:09:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:44.655 16:09:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:44.655 16:09:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:44.655 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:44.655 16:09:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:44.655 16:09:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:44.655 16:09:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.655 16:09:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.655 16:09:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:44.655 16:09:23 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:44.655 16:09:23 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:44.655 16:09:23 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:44.655 16:09:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:44.655 16:09:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.655 16:09:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:44.655 16:09:23 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.655 16:09:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:44.655 Found net devices under 0000:86:00.0: cvl_0_0 00:26:44.655 16:09:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.655 16:09:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:44.655 16:09:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.655 16:09:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:44.655 16:09:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.655 16:09:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:44.655 Found net devices under 0000:86:00.1: cvl_0_1 00:26:44.655 16:09:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.655 16:09:23 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:44.655 16:09:23 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:44.655 16:09:23 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:44.655 16:09:23 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:44.655 16:09:23 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:44.655 16:09:23 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:44.655 16:09:23 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:44.655 16:09:23 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:44.655 16:09:23 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:44.655 16:09:23 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:44.655 16:09:23 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:44.655 16:09:23 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:44.655 16:09:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:44.655 16:09:23 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:44.655 16:09:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:44.655 16:09:23 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:44.655 16:09:23 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:44.655 16:09:23 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:44.655 16:09:23 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:44.655 16:09:23 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:44.655 16:09:23 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:44.655 16:09:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:44.655 16:09:23 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:44.655 16:09:23 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:44.655 16:09:23 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:44.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:44.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:26:44.655 00:26:44.655 --- 10.0.0.2 ping statistics --- 00:26:44.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.655 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:26:44.655 16:09:23 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:44.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:44.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.392 ms 00:26:44.655 00:26:44.655 --- 10.0.0.1 ping statistics --- 00:26:44.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.655 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:26:44.655 16:09:23 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.655 16:09:23 -- nvmf/common.sh@411 -- # return 0 00:26:44.655 16:09:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:44.655 16:09:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.655 16:09:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:44.655 16:09:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:44.655 16:09:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.655 16:09:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:44.655 16:09:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:44.655 16:09:23 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:44.655 16:09:23 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:44.655 16:09:23 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:44.656 16:09:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:44.656 16:09:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:44.656 16:09:23 -- common/autotest_common.sh@10 -- # set +x 00:26:44.656 ************************************ 00:26:44.656 START TEST nvmf_digest_clean 00:26:44.656 ************************************ 00:26:44.656 16:09:23 -- common/autotest_common.sh@1111 -- # run_digest 00:26:44.656 16:09:23 -- host/digest.sh@120 -- # local dsa_initiator 00:26:44.656 16:09:23 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:44.656 16:09:23 -- host/digest.sh@121 -- # dsa_initiator=false 00:26:44.656 16:09:23 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:44.656 16:09:23 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:44.656 16:09:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:44.656 16:09:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:44.656 16:09:23 -- common/autotest_common.sh@10 -- # set +x 00:26:44.656 16:09:23 -- nvmf/common.sh@470 -- # nvmfpid=2587445 00:26:44.656 16:09:23 -- nvmf/common.sh@471 -- # waitforlisten 2587445 00:26:44.656 16:09:23 -- common/autotest_common.sh@817 -- # '[' -z 2587445 ']' 00:26:44.656 16:09:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.656 16:09:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:44.656 16:09:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.656 16:09:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:44.656 16:09:23 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:44.656 16:09:23 -- common/autotest_common.sh@10 -- # set +x 00:26:44.656 [2024-04-26 16:09:24.039960] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:26:44.656 [2024-04-26 16:09:24.040048] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.656 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.656 [2024-04-26 16:09:24.148212] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.915 [2024-04-26 16:09:24.370851] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.915 [2024-04-26 16:09:24.370892] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.915 [2024-04-26 16:09:24.370902] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.915 [2024-04-26 16:09:24.370913] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.915 [2024-04-26 16:09:24.370923] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:44.915 [2024-04-26 16:09:24.370955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.174 16:09:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:45.174 16:09:24 -- common/autotest_common.sh@850 -- # return 0 00:26:45.174 16:09:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:45.174 16:09:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:45.174 16:09:24 -- common/autotest_common.sh@10 -- # set +x 00:26:45.174 16:09:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:45.174 16:09:24 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:45.174 16:09:24 -- host/digest.sh@126 -- # common_target_config 00:26:45.174 16:09:24 -- host/digest.sh@43 -- # rpc_cmd 00:26:45.174 16:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.174 16:09:24 -- common/autotest_common.sh@10 -- # set +x 00:26:45.744 null0 00:26:45.744 [2024-04-26 16:09:25.213157] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.744 [2024-04-26 16:09:25.237390] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.744 16:09:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.744 16:09:25 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:45.744 16:09:25 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:45.744 16:09:25 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:45.744 16:09:25 -- host/digest.sh@80 -- # rw=randread 00:26:45.744 16:09:25 -- host/digest.sh@80 -- # bs=4096 00:26:45.744 16:09:25 -- host/digest.sh@80 -- # qd=128 00:26:45.744 16:09:25 -- host/digest.sh@80 -- # scan_dsa=false 00:26:45.744 16:09:25 -- host/digest.sh@83 -- # bperfpid=2587693 00:26:45.744 16:09:25 -- host/digest.sh@84 -- # waitforlisten 2587693 /var/tmp/bperf.sock 00:26:45.744 16:09:25 -- common/autotest_common.sh@817 -- # '[' -z 2587693 ']' 00:26:45.744 16:09:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:45.744 16:09:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:45.744 16:09:25 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:45.744 16:09:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:45.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:45.744 16:09:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:45.744 16:09:25 -- common/autotest_common.sh@10 -- # set +x 00:26:45.744 [2024-04-26 16:09:25.310830] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:45.744 [2024-04-26 16:09:25.310927] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2587693 ] 00:26:45.744 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.744 [2024-04-26 16:09:25.414891] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.003 [2024-04-26 16:09:25.645440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.573 16:09:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:46.573 16:09:26 -- common/autotest_common.sh@850 -- # return 0 00:26:46.573 16:09:26 -- host/digest.sh@86 -- # false 00:26:46.573 16:09:26 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:46.573 16:09:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:47.145 16:09:26 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:47.145 16:09:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:47.404 nvme0n1 00:26:47.404 16:09:27 -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:47.404 16:09:27 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:47.663 Running I/O for 2 seconds... 
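(A condensed sketch of the flow the digest-clean test traces above: bdevperf is started with --wait-for-rpc on /var/tmp/bperf.sock, initialized over RPC, attached to the TCP target with data digest enabled, driven for 2 seconds, and its crc32c accel stats are read back afterwards. The commands are the ones captured in this log; the $SPDK shorthand for the checkout path and the "&" backgrounding are added here only for readability, so treat this as a reconstruction rather than part of the captured output.)

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# start the initiator-side bdevperf on its own RPC socket; --wait-for-rpc defers framework init
$SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
# finish framework init, then enable data digest (--ddgst) and attach to the TCP target at 10.0.0.2:4420
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# run the 2-second workload, then check which accel module executed the crc32c operations
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'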
00:26:49.567 00:26:49.567 Latency(us) 00:26:49.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.567 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:49.567 nvme0n1 : 2.00 22852.42 89.27 0.00 0.00 5594.52 2835.14 18122.13 00:26:49.567 =================================================================================================================== 00:26:49.567 Total : 22852.42 89.27 0.00 0.00 5594.52 2835.14 18122.13 00:26:49.567 0 00:26:49.567 16:09:29 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:49.567 16:09:29 -- host/digest.sh@93 -- # get_accel_stats 00:26:49.567 16:09:29 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:49.567 16:09:29 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:49.567 | select(.opcode=="crc32c") 00:26:49.567 | "\(.module_name) \(.executed)"' 00:26:49.567 16:09:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:49.825 16:09:29 -- host/digest.sh@94 -- # false 00:26:49.825 16:09:29 -- host/digest.sh@94 -- # exp_module=software 00:26:49.826 16:09:29 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:49.826 16:09:29 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:49.826 16:09:29 -- host/digest.sh@98 -- # killprocess 2587693 00:26:49.826 16:09:29 -- common/autotest_common.sh@936 -- # '[' -z 2587693 ']' 00:26:49.826 16:09:29 -- common/autotest_common.sh@940 -- # kill -0 2587693 00:26:49.826 16:09:29 -- common/autotest_common.sh@941 -- # uname 00:26:49.826 16:09:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:49.826 16:09:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2587693 00:26:49.826 16:09:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:49.826 16:09:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:49.826 16:09:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2587693' 00:26:49.826 killing process with pid 2587693 00:26:49.826 16:09:29 -- common/autotest_common.sh@955 -- # kill 2587693 00:26:49.826 Received shutdown signal, test time was about 2.000000 seconds 00:26:49.826 00:26:49.826 Latency(us) 00:26:49.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.826 =================================================================================================================== 00:26:49.826 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:49.826 16:09:29 -- common/autotest_common.sh@960 -- # wait 2587693 00:26:50.761 16:09:30 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:50.761 16:09:30 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:50.761 16:09:30 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:50.761 16:09:30 -- host/digest.sh@80 -- # rw=randread 00:26:50.761 16:09:30 -- host/digest.sh@80 -- # bs=131072 00:26:50.761 16:09:30 -- host/digest.sh@80 -- # qd=16 00:26:50.761 16:09:30 -- host/digest.sh@80 -- # scan_dsa=false 00:26:50.761 16:09:30 -- host/digest.sh@83 -- # bperfpid=2588613 00:26:50.761 16:09:30 -- host/digest.sh@84 -- # waitforlisten 2588613 /var/tmp/bperf.sock 00:26:50.761 16:09:30 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:50.761 16:09:30 -- common/autotest_common.sh@817 -- # '[' -z 2588613 ']' 00:26:50.761 16:09:30 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:50.761 16:09:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:50.761 16:09:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:50.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:50.761 16:09:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:50.761 16:09:30 -- common/autotest_common.sh@10 -- # set +x 00:26:51.020 [2024-04-26 16:09:30.465288] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:51.020 [2024-04-26 16:09:30.465381] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2588613 ] 00:26:51.020 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:51.020 Zero copy mechanism will not be used. 00:26:51.020 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.020 [2024-04-26 16:09:30.568429] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.279 [2024-04-26 16:09:30.794551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.847 16:09:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:51.847 16:09:31 -- common/autotest_common.sh@850 -- # return 0 00:26:51.847 16:09:31 -- host/digest.sh@86 -- # false 00:26:51.847 16:09:31 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:51.847 16:09:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:52.106 16:09:31 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:52.106 16:09:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:52.364 nvme0n1 00:26:52.364 16:09:32 -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:52.364 16:09:32 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:52.623 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:52.623 Zero copy mechanism will not be used. 00:26:52.623 Running I/O for 2 seconds... 
00:26:54.647 00:26:54.647 Latency(us) 00:26:54.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.647 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:54.647 nvme0n1 : 2.00 2677.93 334.74 0.00 0.00 5971.70 5271.37 18578.03 00:26:54.647 =================================================================================================================== 00:26:54.647 Total : 2677.93 334.74 0.00 0.00 5971.70 5271.37 18578.03 00:26:54.647 0 00:26:54.647 16:09:34 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:54.647 16:09:34 -- host/digest.sh@93 -- # get_accel_stats 00:26:54.647 16:09:34 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:54.647 16:09:34 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:54.647 | select(.opcode=="crc32c") 00:26:54.647 | "\(.module_name) \(.executed)"' 00:26:54.647 16:09:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:54.647 16:09:34 -- host/digest.sh@94 -- # false 00:26:54.647 16:09:34 -- host/digest.sh@94 -- # exp_module=software 00:26:54.647 16:09:34 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:54.647 16:09:34 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:54.647 16:09:34 -- host/digest.sh@98 -- # killprocess 2588613 00:26:54.647 16:09:34 -- common/autotest_common.sh@936 -- # '[' -z 2588613 ']' 00:26:54.647 16:09:34 -- common/autotest_common.sh@940 -- # kill -0 2588613 00:26:54.647 16:09:34 -- common/autotest_common.sh@941 -- # uname 00:26:54.647 16:09:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:54.647 16:09:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2588613 00:26:54.906 16:09:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:54.906 16:09:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:54.906 16:09:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2588613' 00:26:54.906 killing process with pid 2588613 00:26:54.906 16:09:34 -- common/autotest_common.sh@955 -- # kill 2588613 00:26:54.906 Received shutdown signal, test time was about 2.000000 seconds 00:26:54.906 00:26:54.906 Latency(us) 00:26:54.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.906 =================================================================================================================== 00:26:54.906 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:54.906 16:09:34 -- common/autotest_common.sh@960 -- # wait 2588613 00:26:55.842 16:09:35 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:55.842 16:09:35 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:55.842 16:09:35 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:55.842 16:09:35 -- host/digest.sh@80 -- # rw=randwrite 00:26:55.842 16:09:35 -- host/digest.sh@80 -- # bs=4096 00:26:55.842 16:09:35 -- host/digest.sh@80 -- # qd=128 00:26:55.842 16:09:35 -- host/digest.sh@80 -- # scan_dsa=false 00:26:55.842 16:09:35 -- host/digest.sh@83 -- # bperfpid=2589322 00:26:55.842 16:09:35 -- host/digest.sh@84 -- # waitforlisten 2589322 /var/tmp/bperf.sock 00:26:55.842 16:09:35 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:55.842 16:09:35 -- common/autotest_common.sh@817 -- # '[' -z 2589322 ']' 00:26:55.842 16:09:35 
-- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:55.842 16:09:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:55.842 16:09:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:55.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:55.842 16:09:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:55.842 16:09:35 -- common/autotest_common.sh@10 -- # set +x 00:26:55.842 [2024-04-26 16:09:35.442183] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:55.842 [2024-04-26 16:09:35.442294] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2589322 ] 00:26:55.842 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.101 [2024-04-26 16:09:35.545515] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.101 [2024-04-26 16:09:35.770186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.675 16:09:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:56.676 16:09:36 -- common/autotest_common.sh@850 -- # return 0 00:26:56.676 16:09:36 -- host/digest.sh@86 -- # false 00:26:56.676 16:09:36 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:56.676 16:09:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:57.250 16:09:36 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:57.250 16:09:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:57.509 nvme0n1 00:26:57.509 16:09:37 -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:57.509 16:09:37 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:57.767 Running I/O for 2 seconds... 
00:26:59.671 00:26:59.671 Latency(us) 00:26:59.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:59.671 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:59.671 nvme0n1 : 2.01 22263.02 86.96 0.00 0.00 5738.48 4872.46 19831.76 00:26:59.671 =================================================================================================================== 00:26:59.671 Total : 22263.02 86.96 0.00 0.00 5738.48 4872.46 19831.76 00:26:59.671 0 00:26:59.671 16:09:39 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:59.671 16:09:39 -- host/digest.sh@93 -- # get_accel_stats 00:26:59.671 16:09:39 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:59.671 16:09:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:59.671 16:09:39 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:59.671 | select(.opcode=="crc32c") 00:26:59.671 | "\(.module_name) \(.executed)"' 00:26:59.931 16:09:39 -- host/digest.sh@94 -- # false 00:26:59.931 16:09:39 -- host/digest.sh@94 -- # exp_module=software 00:26:59.931 16:09:39 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:59.931 16:09:39 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:59.931 16:09:39 -- host/digest.sh@98 -- # killprocess 2589322 00:26:59.931 16:09:39 -- common/autotest_common.sh@936 -- # '[' -z 2589322 ']' 00:26:59.931 16:09:39 -- common/autotest_common.sh@940 -- # kill -0 2589322 00:26:59.931 16:09:39 -- common/autotest_common.sh@941 -- # uname 00:26:59.931 16:09:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:59.931 16:09:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2589322 00:26:59.931 16:09:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:59.931 16:09:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:59.931 16:09:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2589322' 00:26:59.931 killing process with pid 2589322 00:26:59.931 16:09:39 -- common/autotest_common.sh@955 -- # kill 2589322 00:26:59.931 Received shutdown signal, test time was about 2.000000 seconds 00:26:59.931 00:26:59.931 Latency(us) 00:26:59.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:59.931 =================================================================================================================== 00:26:59.931 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:59.931 16:09:39 -- common/autotest_common.sh@960 -- # wait 2589322 00:27:00.868 16:09:40 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:00.868 16:09:40 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:00.868 16:09:40 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:00.868 16:09:40 -- host/digest.sh@80 -- # rw=randwrite 00:27:00.868 16:09:40 -- host/digest.sh@80 -- # bs=131072 00:27:00.868 16:09:40 -- host/digest.sh@80 -- # qd=16 00:27:00.868 16:09:40 -- host/digest.sh@80 -- # scan_dsa=false 00:27:00.868 16:09:40 -- host/digest.sh@83 -- # bperfpid=2590246 00:27:00.868 16:09:40 -- host/digest.sh@84 -- # waitforlisten 2590246 /var/tmp/bperf.sock 00:27:00.868 16:09:40 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:00.868 16:09:40 -- common/autotest_common.sh@817 -- # '[' -z 2590246 ']' 00:27:00.868 
16:09:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:00.868 16:09:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:00.868 16:09:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:00.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:00.868 16:09:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:00.868 16:09:40 -- common/autotest_common.sh@10 -- # set +x 00:27:01.127 [2024-04-26 16:09:40.583263] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:27:01.127 [2024-04-26 16:09:40.583356] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2590246 ] 00:27:01.127 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:01.127 Zero copy mechanism will not be used. 00:27:01.127 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.127 [2024-04-26 16:09:40.685683] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.387 [2024-04-26 16:09:40.912060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.956 16:09:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:01.956 16:09:41 -- common/autotest_common.sh@850 -- # return 0 00:27:01.956 16:09:41 -- host/digest.sh@86 -- # false 00:27:01.956 16:09:41 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:01.956 16:09:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:02.525 16:09:41 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:02.525 16:09:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:02.525 nvme0n1 00:27:02.525 16:09:42 -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:02.525 16:09:42 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:02.525 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:02.525 Zero copy mechanism will not be used. 00:27:02.525 Running I/O for 2 seconds... 
00:27:05.061 00:27:05.061 Latency(us) 00:27:05.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.061 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:05.061 nvme0n1 : 2.01 2235.12 279.39 0.00 0.00 7144.31 5157.40 22795.13 00:27:05.061 =================================================================================================================== 00:27:05.061 Total : 2235.12 279.39 0.00 0.00 7144.31 5157.40 22795.13 00:27:05.061 0 00:27:05.061 16:09:44 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:05.061 16:09:44 -- host/digest.sh@93 -- # get_accel_stats 00:27:05.061 16:09:44 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:05.061 16:09:44 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:05.061 | select(.opcode=="crc32c") 00:27:05.061 | "\(.module_name) \(.executed)"' 00:27:05.061 16:09:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:05.061 16:09:44 -- host/digest.sh@94 -- # false 00:27:05.061 16:09:44 -- host/digest.sh@94 -- # exp_module=software 00:27:05.061 16:09:44 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:05.061 16:09:44 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:05.061 16:09:44 -- host/digest.sh@98 -- # killprocess 2590246 00:27:05.061 16:09:44 -- common/autotest_common.sh@936 -- # '[' -z 2590246 ']' 00:27:05.061 16:09:44 -- common/autotest_common.sh@940 -- # kill -0 2590246 00:27:05.061 16:09:44 -- common/autotest_common.sh@941 -- # uname 00:27:05.061 16:09:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:05.061 16:09:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2590246 00:27:05.061 16:09:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:05.061 16:09:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:05.061 16:09:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2590246' 00:27:05.061 killing process with pid 2590246 00:27:05.061 16:09:44 -- common/autotest_common.sh@955 -- # kill 2590246 00:27:05.061 Received shutdown signal, test time was about 2.000000 seconds 00:27:05.061 00:27:05.061 Latency(us) 00:27:05.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.061 =================================================================================================================== 00:27:05.061 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:05.061 16:09:44 -- common/autotest_common.sh@960 -- # wait 2590246 00:27:06.000 16:09:45 -- host/digest.sh@132 -- # killprocess 2587445 00:27:06.000 16:09:45 -- common/autotest_common.sh@936 -- # '[' -z 2587445 ']' 00:27:06.000 16:09:45 -- common/autotest_common.sh@940 -- # kill -0 2587445 00:27:06.000 16:09:45 -- common/autotest_common.sh@941 -- # uname 00:27:06.000 16:09:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:06.000 16:09:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2587445 00:27:06.000 16:09:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:06.000 16:09:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:06.000 16:09:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2587445' 00:27:06.000 killing process with pid 2587445 00:27:06.000 16:09:45 -- common/autotest_common.sh@955 -- # kill 2587445 00:27:06.000 16:09:45 -- common/autotest_common.sh@960 -- # wait 2587445 00:27:07.380 
00:27:07.380 real 0m22.816s 00:27:07.380 user 0m43.225s 00:27:07.380 sys 0m4.023s 00:27:07.380 16:09:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:07.380 16:09:46 -- common/autotest_common.sh@10 -- # set +x 00:27:07.380 ************************************ 00:27:07.380 END TEST nvmf_digest_clean 00:27:07.380 ************************************ 00:27:07.380 16:09:46 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:07.380 16:09:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:07.380 16:09:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:07.380 16:09:46 -- common/autotest_common.sh@10 -- # set +x 00:27:07.380 ************************************ 00:27:07.380 START TEST nvmf_digest_error 00:27:07.380 ************************************ 00:27:07.380 16:09:46 -- common/autotest_common.sh@1111 -- # run_digest_error 00:27:07.380 16:09:46 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:07.380 16:09:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:07.380 16:09:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:07.380 16:09:46 -- common/autotest_common.sh@10 -- # set +x 00:27:07.380 16:09:46 -- nvmf/common.sh@470 -- # nvmfpid=2591215 00:27:07.380 16:09:46 -- nvmf/common.sh@471 -- # waitforlisten 2591215 00:27:07.380 16:09:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:07.380 16:09:46 -- common/autotest_common.sh@817 -- # '[' -z 2591215 ']' 00:27:07.380 16:09:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.380 16:09:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:07.380 16:09:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.380 16:09:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:07.380 16:09:46 -- common/autotest_common.sh@10 -- # set +x 00:27:07.380 [2024-04-26 16:09:47.031522] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:27:07.380 [2024-04-26 16:09:47.031609] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.640 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.640 [2024-04-26 16:09:47.139251] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.898 [2024-04-26 16:09:47.353327] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.898 [2024-04-26 16:09:47.353372] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.898 [2024-04-26 16:09:47.353382] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.898 [2024-04-26 16:09:47.353408] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:07.898 [2024-04-26 16:09:47.353418] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:07.898 [2024-04-26 16:09:47.353446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.157 16:09:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:08.157 16:09:47 -- common/autotest_common.sh@850 -- # return 0 00:27:08.157 16:09:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:08.157 16:09:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:08.157 16:09:47 -- common/autotest_common.sh@10 -- # set +x 00:27:08.157 16:09:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:08.157 16:09:47 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:08.157 16:09:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.157 16:09:47 -- common/autotest_common.sh@10 -- # set +x 00:27:08.157 [2024-04-26 16:09:47.831159] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:08.157 16:09:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:08.157 16:09:47 -- host/digest.sh@105 -- # common_target_config 00:27:08.157 16:09:47 -- host/digest.sh@43 -- # rpc_cmd 00:27:08.157 16:09:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.157 16:09:47 -- common/autotest_common.sh@10 -- # set +x 00:27:08.725 null0 00:27:08.725 [2024-04-26 16:09:48.206657] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.725 [2024-04-26 16:09:48.230855] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.725 16:09:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:08.725 16:09:48 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:08.725 16:09:48 -- host/digest.sh@54 -- # local rw bs qd 00:27:08.725 16:09:48 -- host/digest.sh@56 -- # rw=randread 00:27:08.725 16:09:48 -- host/digest.sh@56 -- # bs=4096 00:27:08.725 16:09:48 -- host/digest.sh@56 -- # qd=128 00:27:08.725 16:09:48 -- host/digest.sh@58 -- # bperfpid=2591463 00:27:08.725 16:09:48 -- host/digest.sh@60 -- # waitforlisten 2591463 /var/tmp/bperf.sock 00:27:08.725 16:09:48 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:08.725 16:09:48 -- common/autotest_common.sh@817 -- # '[' -z 2591463 ']' 00:27:08.725 16:09:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:08.725 16:09:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:08.725 16:09:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:08.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:08.725 16:09:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:08.725 16:09:48 -- common/autotest_common.sh@10 -- # set +x 00:27:08.725 [2024-04-26 16:09:48.307550] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:27:08.725 [2024-04-26 16:09:48.307635] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2591463 ] 00:27:08.725 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.984 [2024-04-26 16:09:48.411305] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.984 [2024-04-26 16:09:48.636875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.553 16:09:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:09.553 16:09:49 -- common/autotest_common.sh@850 -- # return 0 00:27:09.553 16:09:49 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:09.553 16:09:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:09.812 16:09:49 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:09.812 16:09:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.812 16:09:49 -- common/autotest_common.sh@10 -- # set +x 00:27:09.812 16:09:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.812 16:09:49 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:09.812 16:09:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:09.812 nvme0n1 00:27:09.812 16:09:49 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:09.812 16:09:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.812 16:09:49 -- common/autotest_common.sh@10 -- # set +x 00:27:10.071 16:09:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.071 16:09:49 -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:10.071 16:09:49 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:10.071 Running I/O for 2 seconds... 
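(A similar condensed sketch for the digest-error path traced above: the target assigns the crc32c opcode to the error accel module, bdevperf attaches with --ddgst, and corrupted digests are then injected so the reads that follow complete with data digest / transient transport errors. The commands are the ones captured in this log; target-side RPCs are assumed to use the default /var/tmp/spdk.sock mentioned above, and the $SPDK shorthand is added only for readability.)

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# on the target: route crc32c through the error-injection accel module
$SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
# on bdevperf: keep per-controller NVMe error stats, retry indefinitely, then attach with data digest
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# start corrupting crc32c results (arguments exactly as traced above), then drive the workload
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests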
00:27:10.071 [2024-04-26 16:09:49.613336] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:10.071 [2024-04-26 16:09:49.613387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.072 [2024-04-26 16:09:49.613404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.072 [2024-04-26 16:09:49.624436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:10.072 [2024-04-26 16:09:49.624472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.072 [2024-04-26 16:09:49.624486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.072 [2024-04-26 16:09:49.635218] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:10.072 [2024-04-26 16:09:49.635250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.072 [2024-04-26 16:09:49.635263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.072 [2024-04-26 16:09:49.646872] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:10.072 [2024-04-26 16:09:49.646900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.072 [2024-04-26 16:09:49.646913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.072 [2024-04-26 16:09:49.656634] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:10.072 [2024-04-26 16:09:49.656662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.072 [2024-04-26 16:09:49.656674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.072 [2024-04-26 16:09:49.668601] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:10.072 [2024-04-26 16:09:49.668628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.072 [2024-04-26 16:09:49.668641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.072 [2024-04-26 16:09:49.680800] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:10.072 [2024-04-26 16:09:49.680828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.072 [2024-04-26 16:09:49.680851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:10.072 [2024-04-26 16:09:49.691413] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:27:10.072 [2024-04-26 16:09:49.691441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:10.072 [2024-04-26 16:09:49.691454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (nvme_tcp.c:1447 data digest error on tqpair=(0x614000007240), nvme_qpair.c:243 READ command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for further qid:1 READ commands with varying cid and lba, log timestamps 16:09:49.702 through 16:09:51.316, console time 00:27:10.072 through 00:27:11.894 ...]
00:27:11.894 [2024-04-26 16:09:51.329044] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:27:11.894 [2024-04-26 16:09:51.329076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.894 [2024-04-26 16:09:51.329089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:11.894 [2024-04-26 16:09:51.339435]
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:11.894 [2024-04-26 16:09:51.339462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.894 [2024-04-26 16:09:51.339474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.894 [2024-04-26 16:09:51.352330] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:11.894 [2024-04-26 16:09:51.352356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.894 [2024-04-26 16:09:51.352368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.894 [2024-04-26 16:09:51.361919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:11.894 [2024-04-26 16:09:51.361945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.894 [2024-04-26 16:09:51.361957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.894 [2024-04-26 16:09:51.374335] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:11.894 [2024-04-26 16:09:51.374362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.894 [2024-04-26 16:09:51.374373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.894 [2024-04-26 16:09:51.385713] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:11.894 [2024-04-26 16:09:51.385739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.894 [2024-04-26 16:09:51.385751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.895 [2024-04-26 16:09:51.396054] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:11.895 [2024-04-26 16:09:51.396089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.895 [2024-04-26 16:09:51.396105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.895 [2024-04-26 16:09:51.407579] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:11.895 [2024-04-26 16:09:51.407605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.895 [2024-04-26 16:09:51.407618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.895 [2024-04-26 16:09:51.419389] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:11.895 [2024-04-26 16:09:51.419416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.895 [2024-04-26 16:09:51.419427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.895 [2024-04-26 16:09:51.429925] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:11.895 [2024-04-26 16:09:51.429951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.895 [2024-04-26 16:09:51.429962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.895 [2024-04-26 16:09:51.442398] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:11.895 [2024-04-26 16:09:51.442425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.895 [2024-04-26 16:09:51.442438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.895 [2024-04-26 16:09:51.451635] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:11.895 [2024-04-26 16:09:51.451662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.895 [2024-04-26 16:09:51.451673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.895 [2024-04-26 16:09:51.463385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:11.895 [2024-04-26 16:09:51.463411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.895 [2024-04-26 16:09:51.463423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.895 [2024-04-26 16:09:51.474703] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:11.895 [2024-04-26 16:09:51.474729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.895 [2024-04-26 16:09:51.474740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.895 [2024-04-26 16:09:51.486078] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:11.895 [2024-04-26 16:09:51.486105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.895 [2024-04-26 16:09:51.486117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.895 [2024-04-26 16:09:51.498559] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:11.895 [2024-04-26 16:09:51.498586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.895 [2024-04-26 16:09:51.498598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.895 [2024-04-26 16:09:51.507891] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:11.895 [2024-04-26 16:09:51.507917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.895 [2024-04-26 16:09:51.507929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.895 [2024-04-26 16:09:51.520270] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:11.895 [2024-04-26 16:09:51.520296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.895 [2024-04-26 16:09:51.520308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.895 [2024-04-26 16:09:51.531419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:11.895 [2024-04-26 16:09:51.531445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.895 [2024-04-26 16:09:51.531457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.895 [2024-04-26 16:09:51.542373] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:11.895 [2024-04-26 16:09:51.542398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.895 [2024-04-26 16:09:51.542410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.895 [2024-04-26 16:09:51.552222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:11.895 [2024-04-26 16:09:51.552248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.895 [2024-04-26 16:09:51.552259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.895 [2024-04-26 16:09:51.564277] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:11.895 [2024-04-26 16:09:51.564303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24687 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0
00:27:11.895 [2024-04-26 16:09:51.564315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:11.895 [2024-04-26 16:09:51.575206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:27:11.895 [2024-04-26 16:09:51.575233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.895 [2024-04-26 16:09:51.575245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.155 [2024-04-26 16:09:51.587658] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:27:12.155 [2024-04-26 16:09:51.587685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.155 [2024-04-26 16:09:51.587701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:12.155
00:27:12.155 Latency(us)
00:27:12.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:12.155 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:12.155 nvme0n1 : 2.00 22135.98 86.47 0.00 0.00 5774.80 2835.14 19831.76
00:27:12.155 ===================================================================================================================
00:27:12.155 Total : 22135.98 86.47 0.00 0.00 5774.80 2835.14 19831.76
00:27:12.155 0
00:27:12.155 16:09:51 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:12.155 16:09:51 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:12.155 16:09:51 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:12.155 | .driver_specific
00:27:12.155 | .nvme_error
00:27:12.155 | .status_code
00:27:12.155 | .command_transient_transport_error'
00:27:12.155 16:09:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:12.155 16:09:51 -- host/digest.sh@71 -- # (( 173 > 0 ))
00:27:12.155 16:09:51 -- host/digest.sh@73 -- # killprocess 2591463
00:27:12.155 16:09:51 -- common/autotest_common.sh@936 -- # '[' -z 2591463 ']'
00:27:12.155 16:09:51 -- common/autotest_common.sh@940 -- # kill -0 2591463
00:27:12.155 16:09:51 -- common/autotest_common.sh@941 -- # uname
00:27:12.155 16:09:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:27:12.155 16:09:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2591463
00:27:12.414 16:09:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:27:12.414 16:09:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:27:12.414 16:09:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2591463'
00:27:12.414 killing process with pid 2591463
00:27:12.414 16:09:51 -- common/autotest_common.sh@955 -- # kill 2591463
00:27:12.414 Received shutdown signal, test time was about 2.000000 seconds
00:27:12.414
00:27:12.414 Latency(us)
00:27:12.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:12.414 ===================================================================================================================
00:27:12.414 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:12.414 16:09:51 -- common/autotest_common.sh@960 -- # wait 2591463
00:27:13.351 16:09:52 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:27:13.351 16:09:52 -- host/digest.sh@54 -- # local rw bs qd
00:27:13.351 16:09:52 -- host/digest.sh@56 -- # rw=randread
00:27:13.351 16:09:52 -- host/digest.sh@56 -- # bs=131072
00:27:13.351 16:09:52 -- host/digest.sh@56 -- # qd=16
00:27:13.351 16:09:52 -- host/digest.sh@58 -- # bperfpid=2592257
00:27:13.351 16:09:52 -- host/digest.sh@60 -- # waitforlisten 2592257 /var/tmp/bperf.sock
00:27:13.351 16:09:52 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:27:13.351 16:09:52 -- common/autotest_common.sh@817 -- # '[' -z 2592257 ']'
00:27:13.351 16:09:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:13.351 16:09:52 -- common/autotest_common.sh@822 -- # local max_retries=100
00:27:13.351 16:09:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:13.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:13.351 16:09:52 -- common/autotest_common.sh@826 -- # xtrace_disable
00:27:13.351 16:09:52 -- common/autotest_common.sh@10 -- # set +x
00:27:13.351 [2024-04-26 16:09:52.943587] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:27:13.351 [2024-04-26 16:09:52.943680] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2592257 ]
00:27:13.351 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:13.351 Zero copy mechanism will not be used.
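For reference, the get_transient_errcount trace above reduces to a single bdev_get_iostat RPC whose JSON output is filtered with jq. A minimal sketch of that check, assuming the same bdevperf RPC socket (/var/tmp/bperf.sock), the same bdev name (nvme0n1), and that the controller was configured with --nvme-error-stat so the per-status counters exist; the errcount variable name is illustrative and not part of the original script:

  # Ask bdevperf for nvme0n1 iostat and pull out the transient transport error counter.
  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The run above counted 173 such errors; the check only passes when the count is non-zero.
  (( errcount > 0 ))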
00:27:13.351 EAL: No free 2048 kB hugepages reported on node 1
00:27:13.610 [2024-04-26 16:09:53.049683] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:13.610 [2024-04-26 16:09:53.275717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:14.178 16:09:53 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:27:14.178 16:09:53 -- common/autotest_common.sh@850 -- # return 0
00:27:14.178 16:09:53 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:14.178 16:09:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:14.436 16:09:53 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:14.436 16:09:53 -- common/autotest_common.sh@549 -- # xtrace_disable
00:27:14.436 16:09:53 -- common/autotest_common.sh@10 -- # set +x
00:27:14.436 16:09:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:27:14.436 16:09:53 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:14.436 16:09:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:14.695 nvme0n1
00:27:14.695 16:09:54 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:14.695 16:09:54 -- common/autotest_common.sh@549 -- # xtrace_disable
00:27:14.695 16:09:54 -- common/autotest_common.sh@10 -- # set +x
00:27:14.695 16:09:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:27:14.695 16:09:54 -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:14.695 16:09:54 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:14.695 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:14.695 Zero copy mechanism will not be used.
00:27:14.695 Running I/O for 2 seconds...
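The xtrace lines above are the whole setup for this digest-error pass, condensed into rpc.py calls against the bdevperf socket; laid out as a sketch for readability (the $rpc shorthand is introduced here and is not in the original script, and the comments are interpretation rather than log output):

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  # Keep per-status NVMe error counters and retry failed I/O instead of failing the bdev.
  $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Leave CRC32C untouched while the controller attaches ...
  $rpc accel_error_inject_error -o crc32c -t disable
  # ... attach over TCP with data digest (--ddgst) enabled ...
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # ... then corrupt CRC32C results (-t corrupt -i 32, as traced) so reads hit data digest errors.
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  # Drive the 2-second randread workload; each digest error surfaces as a transient transport error.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests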
00:27:14.695 [2024-04-26 16:09:54.251775] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.695 [2024-04-26 16:09:54.251825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.695 [2024-04-26 16:09:54.251844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.695 [2024-04-26 16:09:54.265372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.695 [2024-04-26 16:09:54.265409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.695 [2024-04-26 16:09:54.265424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.695 [2024-04-26 16:09:54.276948] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.695 [2024-04-26 16:09:54.276980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.695 [2024-04-26 16:09:54.276994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.695 [2024-04-26 16:09:54.288004] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.695 [2024-04-26 16:09:54.288034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.695 [2024-04-26 16:09:54.288048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.695 [2024-04-26 16:09:54.298893] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.695 [2024-04-26 16:09:54.298925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.695 [2024-04-26 16:09:54.298938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.695 [2024-04-26 16:09:54.309795] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.696 [2024-04-26 16:09:54.309825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.696 [2024-04-26 16:09:54.309838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.696 [2024-04-26 16:09:54.320727] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.696 [2024-04-26 16:09:54.320755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.696 [2024-04-26 16:09:54.320779] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.696 [2024-04-26 16:09:54.331612] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.696 [2024-04-26 16:09:54.331640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.696 [2024-04-26 16:09:54.331651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.696 [2024-04-26 16:09:54.342513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.696 [2024-04-26 16:09:54.342540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.696 [2024-04-26 16:09:54.342552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.696 [2024-04-26 16:09:54.353365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.696 [2024-04-26 16:09:54.353392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.696 [2024-04-26 16:09:54.353404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.696 [2024-04-26 16:09:54.364562] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.696 [2024-04-26 16:09:54.364589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.696 [2024-04-26 16:09:54.364601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.696 [2024-04-26 16:09:54.375604] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.696 [2024-04-26 16:09:54.375631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.696 [2024-04-26 16:09:54.375643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.956 [2024-04-26 16:09:54.386625] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.956 [2024-04-26 16:09:54.386651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.956 [2024-04-26 16:09:54.386667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.956 [2024-04-26 16:09:54.397613] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.956 [2024-04-26 16:09:54.397640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:14.956 [2024-04-26 16:09:54.397652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.956 [2024-04-26 16:09:54.408612] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.956 [2024-04-26 16:09:54.408638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.956 [2024-04-26 16:09:54.408650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.956 [2024-04-26 16:09:54.419860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.956 [2024-04-26 16:09:54.419886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.956 [2024-04-26 16:09:54.419898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.956 [2024-04-26 16:09:54.431008] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.956 [2024-04-26 16:09:54.431033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.956 [2024-04-26 16:09:54.431045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.956 [2024-04-26 16:09:54.442035] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.956 [2024-04-26 16:09:54.442061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.956 [2024-04-26 16:09:54.442079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.956 [2024-04-26 16:09:54.452838] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.956 [2024-04-26 16:09:54.452865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.956 [2024-04-26 16:09:54.452876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.956 [2024-04-26 16:09:54.463680] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.956 [2024-04-26 16:09:54.463705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.956 [2024-04-26 16:09:54.463716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.956 [2024-04-26 16:09:54.474626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.956 [2024-04-26 16:09:54.474652] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.956 [2024-04-26 16:09:54.474664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.956 [2024-04-26 16:09:54.485675] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.956 [2024-04-26 16:09:54.485701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.956 [2024-04-26 16:09:54.485713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.956 [2024-04-26 16:09:54.496737] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.956 [2024-04-26 16:09:54.496763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.956 [2024-04-26 16:09:54.496775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.956 [2024-04-26 16:09:54.507725] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.956 [2024-04-26 16:09:54.507751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.956 [2024-04-26 16:09:54.507763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.956 [2024-04-26 16:09:54.518666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.956 [2024-04-26 16:09:54.518695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.956 [2024-04-26 16:09:54.518707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.956 [2024-04-26 16:09:54.530831] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.956 [2024-04-26 16:09:54.530857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.956 [2024-04-26 16:09:54.530869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.956 [2024-04-26 16:09:54.542095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.956 [2024-04-26 16:09:54.542121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.956 [2024-04-26 16:09:54.542133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.956 [2024-04-26 16:09:54.553632] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 
00:27:14.956 [2024-04-26 16:09:54.553658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.956 [2024-04-26 16:09:54.553669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.956 [2024-04-26 16:09:54.564541] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.956 [2024-04-26 16:09:54.564566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.956 [2024-04-26 16:09:54.564577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.956 [2024-04-26 16:09:54.583829] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.956 [2024-04-26 16:09:54.583854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.956 [2024-04-26 16:09:54.583869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.956 [2024-04-26 16:09:54.605335] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.956 [2024-04-26 16:09:54.605361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.956 [2024-04-26 16:09:54.605373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.956 [2024-04-26 16:09:54.624093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:14.956 [2024-04-26 16:09:54.624119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.956 [2024-04-26 16:09:54.624130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.216 [2024-04-26 16:09:54.638720] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.216 [2024-04-26 16:09:54.638748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.216 [2024-04-26 16:09:54.638761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.216 [2024-04-26 16:09:54.658717] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.216 [2024-04-26 16:09:54.658745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.216 [2024-04-26 16:09:54.658757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.216 [2024-04-26 16:09:54.675556] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.216 [2024-04-26 16:09:54.675583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.216 [2024-04-26 16:09:54.675596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.216 [2024-04-26 16:09:54.689484] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.216 [2024-04-26 16:09:54.689517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.216 [2024-04-26 16:09:54.689529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.216 [2024-04-26 16:09:54.701088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.216 [2024-04-26 16:09:54.701114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.216 [2024-04-26 16:09:54.701125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.216 [2024-04-26 16:09:54.712299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.216 [2024-04-26 16:09:54.712325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.216 [2024-04-26 16:09:54.712337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.216 [2024-04-26 16:09:54.724302] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.216 [2024-04-26 16:09:54.724328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.216 [2024-04-26 16:09:54.724340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.216 [2024-04-26 16:09:54.735554] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.216 [2024-04-26 16:09:54.735581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.216 [2024-04-26 16:09:54.735592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.216 [2024-04-26 16:09:54.747247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.216 [2024-04-26 16:09:54.747273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.216 [2024-04-26 16:09:54.747285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.217 [2024-04-26 16:09:54.766241] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.217 [2024-04-26 16:09:54.766268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.217 [2024-04-26 16:09:54.766280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.217 [2024-04-26 16:09:54.781470] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.217 [2024-04-26 16:09:54.781497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.217 [2024-04-26 16:09:54.781509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.217 [2024-04-26 16:09:54.794879] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.217 [2024-04-26 16:09:54.794907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.217 [2024-04-26 16:09:54.794919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.217 [2024-04-26 16:09:54.806274] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.217 [2024-04-26 16:09:54.806300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.217 [2024-04-26 16:09:54.806311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.217 [2024-04-26 16:09:54.817562] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.217 [2024-04-26 16:09:54.817588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.217 [2024-04-26 16:09:54.817600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.217 [2024-04-26 16:09:54.838583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.217 [2024-04-26 16:09:54.838610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.217 [2024-04-26 16:09:54.838625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.217 [2024-04-26 16:09:54.858234] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.217 [2024-04-26 16:09:54.858260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.217 [2024-04-26 16:09:54.858272] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.217 [2024-04-26 16:09:54.879359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.217 [2024-04-26 16:09:54.879385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.217 [2024-04-26 16:09:54.879397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.477 [2024-04-26 16:09:54.900081] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.477 [2024-04-26 16:09:54.900118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.477 [2024-04-26 16:09:54.900130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.477 [2024-04-26 16:09:54.919749] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.477 [2024-04-26 16:09:54.919775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.477 [2024-04-26 16:09:54.919787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.477 [2024-04-26 16:09:54.935155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.477 [2024-04-26 16:09:54.935181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.477 [2024-04-26 16:09:54.935193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.477 [2024-04-26 16:09:54.946932] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.477 [2024-04-26 16:09:54.946957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.477 [2024-04-26 16:09:54.946968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.477 [2024-04-26 16:09:54.957991] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.477 [2024-04-26 16:09:54.958017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.477 [2024-04-26 16:09:54.958028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.477 [2024-04-26 16:09:54.970014] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.477 [2024-04-26 16:09:54.970039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.477 [2024-04-26 16:09:54.970051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.477 [2024-04-26 16:09:54.990268] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.477 [2024-04-26 16:09:54.990294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.477 [2024-04-26 16:09:54.990305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.477 [2024-04-26 16:09:55.008644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.477 [2024-04-26 16:09:55.008672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.477 [2024-04-26 16:09:55.008684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.477 [2024-04-26 16:09:55.022806] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.477 [2024-04-26 16:09:55.022833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.477 [2024-04-26 16:09:55.022845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.477 [2024-04-26 16:09:55.034082] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.477 [2024-04-26 16:09:55.034109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.477 [2024-04-26 16:09:55.034120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.477 [2024-04-26 16:09:55.051246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.477 [2024-04-26 16:09:55.051272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.477 [2024-04-26 16:09:55.051285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.477 [2024-04-26 16:09:55.066729] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.477 [2024-04-26 16:09:55.066755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.477 [2024-04-26 16:09:55.066766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.477 [2024-04-26 16:09:55.079083] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.477 [2024-04-26 
16:09:55.079108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.477 [2024-04-26 16:09:55.079120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.477 [2024-04-26 16:09:55.090966] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.477 [2024-04-26 16:09:55.090991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.477 [2024-04-26 16:09:55.091002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.477 [2024-04-26 16:09:55.110239] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.477 [2024-04-26 16:09:55.110266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.477 [2024-04-26 16:09:55.110282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.477 [2024-04-26 16:09:55.123380] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.477 [2024-04-26 16:09:55.123406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.477 [2024-04-26 16:09:55.123417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.477 [2024-04-26 16:09:55.135149] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.477 [2024-04-26 16:09:55.135174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.477 [2024-04-26 16:09:55.135186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.477 [2024-04-26 16:09:55.152511] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.477 [2024-04-26 16:09:55.152537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.477 [2024-04-26 16:09:55.152548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.737 [2024-04-26 16:09:55.167762] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.737 [2024-04-26 16:09:55.167790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.737 [2024-04-26 16:09:55.167802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.737 [2024-04-26 16:09:55.180202] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.737 [2024-04-26 16:09:55.180229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.737 [2024-04-26 16:09:55.180240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.737 [2024-04-26 16:09:55.194595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.737 [2024-04-26 16:09:55.194623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.737 [2024-04-26 16:09:55.194635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.737 [2024-04-26 16:09:55.206573] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.737 [2024-04-26 16:09:55.206600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.737 [2024-04-26 16:09:55.206612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.737 [2024-04-26 16:09:55.223368] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.737 [2024-04-26 16:09:55.223395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.737 [2024-04-26 16:09:55.223408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.737 [2024-04-26 16:09:55.238354] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.737 [2024-04-26 16:09:55.238382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.737 [2024-04-26 16:09:55.238394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.737 [2024-04-26 16:09:55.251769] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.737 [2024-04-26 16:09:55.251798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.737 [2024-04-26 16:09:55.251811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.737 [2024-04-26 16:09:55.263872] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.737 [2024-04-26 16:09:55.263899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.737 [2024-04-26 16:09:55.263912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.737 [2024-04-26 16:09:55.283560] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.737 [2024-04-26 16:09:55.283586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.737 [2024-04-26 16:09:55.283599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.737 [2024-04-26 16:09:55.299001] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.738 [2024-04-26 16:09:55.299028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.738 [2024-04-26 16:09:55.299041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.738 [2024-04-26 16:09:55.310839] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.738 [2024-04-26 16:09:55.310868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.738 [2024-04-26 16:09:55.310880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.738 [2024-04-26 16:09:55.322765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.738 [2024-04-26 16:09:55.322791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.738 [2024-04-26 16:09:55.322803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.738 [2024-04-26 16:09:55.334491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.738 [2024-04-26 16:09:55.334516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.738 [2024-04-26 16:09:55.334528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.738 [2024-04-26 16:09:55.354322] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.738 [2024-04-26 16:09:55.354348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.738 [2024-04-26 16:09:55.354364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.738 [2024-04-26 16:09:55.369161] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.738 [2024-04-26 16:09:55.369187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.738 [2024-04-26 16:09:55.369199] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.738 [2024-04-26 16:09:55.380716] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.738 [2024-04-26 16:09:55.380743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.738 [2024-04-26 16:09:55.380755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.738 [2024-04-26 16:09:55.391642] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.738 [2024-04-26 16:09:55.391669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.738 [2024-04-26 16:09:55.391680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.738 [2024-04-26 16:09:55.402546] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.738 [2024-04-26 16:09:55.402572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.738 [2024-04-26 16:09:55.402583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.738 [2024-04-26 16:09:55.413469] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.738 [2024-04-26 16:09:55.413494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.738 [2024-04-26 16:09:55.413506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.997 [2024-04-26 16:09:55.424470] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.997 [2024-04-26 16:09:55.424496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.997 [2024-04-26 16:09:55.424509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.997 [2024-04-26 16:09:55.435349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.997 [2024-04-26 16:09:55.435374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.997 [2024-04-26 16:09:55.435386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.997 [2024-04-26 16:09:55.446214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.998 [2024-04-26 16:09:55.446240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.998 [2024-04-26 16:09:55.446251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.998 [2024-04-26 16:09:55.457241] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.998 [2024-04-26 16:09:55.457270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.998 [2024-04-26 16:09:55.457281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.998 [2024-04-26 16:09:55.468244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.998 [2024-04-26 16:09:55.468269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.998 [2024-04-26 16:09:55.468281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.998 [2024-04-26 16:09:55.479349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.998 [2024-04-26 16:09:55.479374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.998 [2024-04-26 16:09:55.479386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.998 [2024-04-26 16:09:55.490243] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.998 [2024-04-26 16:09:55.490268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.998 [2024-04-26 16:09:55.490280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.998 [2024-04-26 16:09:55.501132] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.998 [2024-04-26 16:09:55.501156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.998 [2024-04-26 16:09:55.501168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.998 [2024-04-26 16:09:55.512144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.998 [2024-04-26 16:09:55.512169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.998 [2024-04-26 16:09:55.512181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.998 [2024-04-26 16:09:55.523036] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.998 [2024-04-26 16:09:55.523061] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.998 [2024-04-26 16:09:55.523078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.998 [2024-04-26 16:09:55.533899] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.998 [2024-04-26 16:09:55.533925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.998 [2024-04-26 16:09:55.533937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.998 [2024-04-26 16:09:55.544821] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.998 [2024-04-26 16:09:55.544848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.998 [2024-04-26 16:09:55.544863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.998 [2024-04-26 16:09:55.555709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.998 [2024-04-26 16:09:55.555736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.998 [2024-04-26 16:09:55.555747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.998 [2024-04-26 16:09:55.566621] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.998 [2024-04-26 16:09:55.566647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.998 [2024-04-26 16:09:55.566659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.998 [2024-04-26 16:09:55.577588] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.998 [2024-04-26 16:09:55.577613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.998 [2024-04-26 16:09:55.577625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.998 [2024-04-26 16:09:55.588700] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.998 [2024-04-26 16:09:55.588725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.998 [2024-04-26 16:09:55.588736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.998 [2024-04-26 16:09:55.599898] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.998 [2024-04-26 16:09:55.599923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.998 [2024-04-26 16:09:55.599935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.998 [2024-04-26 16:09:55.610835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.998 [2024-04-26 16:09:55.610860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.998 [2024-04-26 16:09:55.610872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.998 [2024-04-26 16:09:55.621822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.998 [2024-04-26 16:09:55.621847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.998 [2024-04-26 16:09:55.621859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.998 [2024-04-26 16:09:55.632807] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.998 [2024-04-26 16:09:55.632832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.998 [2024-04-26 16:09:55.632844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.998 [2024-04-26 16:09:55.643895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.998 [2024-04-26 16:09:55.643924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.998 [2024-04-26 16:09:55.643935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.998 [2024-04-26 16:09:55.654830] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.998 [2024-04-26 16:09:55.654854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.998 [2024-04-26 16:09:55.654865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.998 [2024-04-26 16:09:55.665907] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.998 [2024-04-26 16:09:55.665932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.998 [2024-04-26 16:09:55.665944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:27:15.998 [2024-04-26 16:09:55.676796] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:15.998 [2024-04-26 16:09:55.676822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.998 [2024-04-26 16:09:55.676834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.258 [2024-04-26 16:09:55.687740] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.258 [2024-04-26 16:09:55.687767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.258 [2024-04-26 16:09:55.687779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.258 [2024-04-26 16:09:55.698782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.258 [2024-04-26 16:09:55.698808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.258 [2024-04-26 16:09:55.698820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.258 [2024-04-26 16:09:55.709608] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.258 [2024-04-26 16:09:55.709633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.258 [2024-04-26 16:09:55.709645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.258 [2024-04-26 16:09:55.720477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.259 [2024-04-26 16:09:55.720503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.259 [2024-04-26 16:09:55.720514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.259 [2024-04-26 16:09:55.731579] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.259 [2024-04-26 16:09:55.731605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.259 [2024-04-26 16:09:55.731624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.259 [2024-04-26 16:09:55.742587] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.259 [2024-04-26 16:09:55.742612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.259 [2024-04-26 16:09:55.742624] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.259 [2024-04-26 16:09:55.753766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.259 [2024-04-26 16:09:55.753792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.259 [2024-04-26 16:09:55.753804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.259 [2024-04-26 16:09:55.764765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.259 [2024-04-26 16:09:55.764791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.259 [2024-04-26 16:09:55.764803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.259 [2024-04-26 16:09:55.775616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.259 [2024-04-26 16:09:55.775642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.259 [2024-04-26 16:09:55.775654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.259 [2024-04-26 16:09:55.786504] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.259 [2024-04-26 16:09:55.786530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.259 [2024-04-26 16:09:55.786541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.259 [2024-04-26 16:09:55.797427] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.259 [2024-04-26 16:09:55.797454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.259 [2024-04-26 16:09:55.797466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.259 [2024-04-26 16:09:55.808482] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.259 [2024-04-26 16:09:55.808507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.259 [2024-04-26 16:09:55.808519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.259 [2024-04-26 16:09:55.819551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.259 [2024-04-26 16:09:55.819577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:16.259 [2024-04-26 16:09:55.819588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.259 [2024-04-26 16:09:55.830468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.259 [2024-04-26 16:09:55.830497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.259 [2024-04-26 16:09:55.830509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.259 [2024-04-26 16:09:55.841698] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.259 [2024-04-26 16:09:55.841723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.259 [2024-04-26 16:09:55.841734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.259 [2024-04-26 16:09:55.852615] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.259 [2024-04-26 16:09:55.852641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.259 [2024-04-26 16:09:55.852653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.259 [2024-04-26 16:09:55.863502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.259 [2024-04-26 16:09:55.863527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.259 [2024-04-26 16:09:55.863539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.259 [2024-04-26 16:09:55.874394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.259 [2024-04-26 16:09:55.874419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.259 [2024-04-26 16:09:55.874431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.259 [2024-04-26 16:09:55.885229] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.259 [2024-04-26 16:09:55.885255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.259 [2024-04-26 16:09:55.885266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.259 [2024-04-26 16:09:55.896125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.259 [2024-04-26 16:09:55.896150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.259 [2024-04-26 16:09:55.896161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.259 [2024-04-26 16:09:55.907308] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.259 [2024-04-26 16:09:55.907334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.259 [2024-04-26 16:09:55.907346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.259 [2024-04-26 16:09:55.918325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.259 [2024-04-26 16:09:55.918351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.259 [2024-04-26 16:09:55.918367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.259 [2024-04-26 16:09:55.929171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.259 [2024-04-26 16:09:55.929197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.259 [2024-04-26 16:09:55.929209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.259 [2024-04-26 16:09:55.940076] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.259 [2024-04-26 16:09:55.940102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.259 [2024-04-26 16:09:55.940114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.520 [2024-04-26 16:09:55.950980] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.520 [2024-04-26 16:09:55.951007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.520 [2024-04-26 16:09:55.951030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.520 [2024-04-26 16:09:55.961882] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.520 [2024-04-26 16:09:55.961908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.520 [2024-04-26 16:09:55.961920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.520 [2024-04-26 16:09:55.972732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x614000007240) 00:27:16.520 [2024-04-26 16:09:55.972758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.520 [2024-04-26 16:09:55.972769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.520 [2024-04-26 16:09:55.983571] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.520 [2024-04-26 16:09:55.983597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.520 [2024-04-26 16:09:55.983608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.520 [2024-04-26 16:09:55.994466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.520 [2024-04-26 16:09:55.994491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.520 [2024-04-26 16:09:55.994502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.520 [2024-04-26 16:09:56.005288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.520 [2024-04-26 16:09:56.005313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.520 [2024-04-26 16:09:56.005325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.520 [2024-04-26 16:09:56.016106] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.520 [2024-04-26 16:09:56.016135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.520 [2024-04-26 16:09:56.016147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.520 [2024-04-26 16:09:56.027081] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.520 [2024-04-26 16:09:56.027108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.520 [2024-04-26 16:09:56.027119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.520 [2024-04-26 16:09:56.037918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.520 [2024-04-26 16:09:56.037943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.520 [2024-04-26 16:09:56.037954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.520 [2024-04-26 
16:09:56.048784] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.520 [2024-04-26 16:09:56.048809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.520 [2024-04-26 16:09:56.048821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.520 [2024-04-26 16:09:56.059636] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.520 [2024-04-26 16:09:56.059662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.520 [2024-04-26 16:09:56.059673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.520 [2024-04-26 16:09:56.070544] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.520 [2024-04-26 16:09:56.070569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.520 [2024-04-26 16:09:56.070581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.520 [2024-04-26 16:09:56.081446] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.520 [2024-04-26 16:09:56.081471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.520 [2024-04-26 16:09:56.081482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.520 [2024-04-26 16:09:56.092312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.520 [2024-04-26 16:09:56.092337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.520 [2024-04-26 16:09:56.092349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.520 [2024-04-26 16:09:56.103117] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.520 [2024-04-26 16:09:56.103143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.520 [2024-04-26 16:09:56.103158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.520 [2024-04-26 16:09:56.114031] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.520 [2024-04-26 16:09:56.114056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.520 [2024-04-26 16:09:56.114069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.520 [2024-04-26 16:09:56.124852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.520 [2024-04-26 16:09:56.124878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.520 [2024-04-26 16:09:56.124890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.520 [2024-04-26 16:09:56.135721] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.520 [2024-04-26 16:09:56.135748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.520 [2024-04-26 16:09:56.135760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.520 [2024-04-26 16:09:56.146594] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.520 [2024-04-26 16:09:56.146620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.520 [2024-04-26 16:09:56.146632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.520 [2024-04-26 16:09:56.157689] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.520 [2024-04-26 16:09:56.157714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.520 [2024-04-26 16:09:56.157726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.520 [2024-04-26 16:09:56.168639] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.521 [2024-04-26 16:09:56.168665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.521 [2024-04-26 16:09:56.168678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.521 [2024-04-26 16:09:56.179526] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.521 [2024-04-26 16:09:56.179551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.521 [2024-04-26 16:09:56.179563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.521 [2024-04-26 16:09:56.190624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.521 [2024-04-26 16:09:56.190649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.521 [2024-04-26 
16:09:56.190661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.521 [2024-04-26 16:09:56.201432] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.521 [2024-04-26 16:09:56.201462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.521 [2024-04-26 16:09:56.201474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.780 [2024-04-26 16:09:56.212308] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.780 [2024-04-26 16:09:56.212334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.780 [2024-04-26 16:09:56.212345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.780 [2024-04-26 16:09:56.223164] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.780 [2024-04-26 16:09:56.223190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.780 [2024-04-26 16:09:56.223201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.780 [2024-04-26 16:09:56.233851] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:27:16.780 [2024-04-26 16:09:56.233877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.780 [2024-04-26 16:09:56.233889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.780 00:27:16.780 Latency(us) 00:27:16.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:16.780 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:16.780 nvme0n1 : 2.00 2491.00 311.38 0.00 0.00 6417.97 5271.37 21997.30 00:27:16.780 =================================================================================================================== 00:27:16.780 Total : 2491.00 311.38 0.00 0.00 6417.97 5271.37 21997.30 00:27:16.780 0 00:27:16.780 16:09:56 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:16.780 16:09:56 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:16.780 16:09:56 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:16.780 | .driver_specific 00:27:16.780 | .nvme_error 00:27:16.780 | .status_code 00:27:16.780 | .command_transient_transport_error' 00:27:16.780 16:09:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:16.780 16:09:56 -- host/digest.sh@71 -- # (( 161 > 0 )) 00:27:16.780 16:09:56 -- host/digest.sh@73 -- # killprocess 2592257 00:27:16.780 16:09:56 -- common/autotest_common.sh@936 -- # '[' -z 2592257 ']' 00:27:16.780 16:09:56 -- common/autotest_common.sh@940 -- # kill -0 
2592257 00:27:16.780 16:09:56 -- common/autotest_common.sh@941 -- # uname 00:27:16.780 16:09:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:16.780 16:09:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2592257 00:27:17.039 16:09:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:17.039 16:09:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:17.039 16:09:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2592257' 00:27:17.039 killing process with pid 2592257 00:27:17.039 16:09:56 -- common/autotest_common.sh@955 -- # kill 2592257 00:27:17.039 Received shutdown signal, test time was about 2.000000 seconds 00:27:17.039 00:27:17.039 Latency(us) 00:27:17.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.039 =================================================================================================================== 00:27:17.039 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:17.039 16:09:56 -- common/autotest_common.sh@960 -- # wait 2592257 00:27:17.977 16:09:57 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:17.977 16:09:57 -- host/digest.sh@54 -- # local rw bs qd 00:27:17.977 16:09:57 -- host/digest.sh@56 -- # rw=randwrite 00:27:17.977 16:09:57 -- host/digest.sh@56 -- # bs=4096 00:27:17.977 16:09:57 -- host/digest.sh@56 -- # qd=128 00:27:17.977 16:09:57 -- host/digest.sh@58 -- # bperfpid=2593081 00:27:17.977 16:09:57 -- host/digest.sh@60 -- # waitforlisten 2593081 /var/tmp/bperf.sock 00:27:17.977 16:09:57 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:17.977 16:09:57 -- common/autotest_common.sh@817 -- # '[' -z 2593081 ']' 00:27:17.977 16:09:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:17.977 16:09:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:17.977 16:09:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:17.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:17.977 16:09:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:17.977 16:09:57 -- common/autotest_common.sh@10 -- # set +x 00:27:17.977 [2024-04-26 16:09:57.576694] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
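The pass/fail decision for the randread pass that just finished is visible in the trace above: get_transient_errcount reads the bdev's NVMe error counters over the bperf RPC socket and jq pulls out the COMMAND TRANSIENT TRANSPORT ERROR count (161 here), which must be greater than zero for the digest test to pass. A minimal standalone sketch of that check, assuming an SPDK checkout in ./spdk, a bdevperf instance listening on /var/tmp/bperf.sock, and that it was started with bdev_nvme_set_options --nvme-error-stat (otherwise the nvme_error block is not reported):

  # Read per-bdev NVMe error statistics from the running bdevperf instance
  errcount=$(./spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The injected CRC-32C corruption must have produced at least one transient transport error
  (( errcount > 0 )) && echo "transient transport errors: $errcount"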
00:27:17.977 [2024-04-26 16:09:57.576787] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2593081 ] 00:27:17.977 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.236 [2024-04-26 16:09:57.682325] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.236 [2024-04-26 16:09:57.906708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.805 16:09:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:18.805 16:09:58 -- common/autotest_common.sh@850 -- # return 0 00:27:18.805 16:09:58 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:18.805 16:09:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:19.064 16:09:58 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:19.064 16:09:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:19.064 16:09:58 -- common/autotest_common.sh@10 -- # set +x 00:27:19.064 16:09:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:19.064 16:09:58 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:19.064 16:09:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:19.324 nvme0n1 00:27:19.324 16:09:58 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:19.324 16:09:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:19.324 16:09:58 -- common/autotest_common.sh@10 -- # set +x 00:27:19.324 16:09:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:19.324 16:09:58 -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:19.324 16:09:58 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:19.324 Running I/O for 2 seconds... 
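The trace above shows the setup for the second pass (randwrite, 4 KiB, queue depth 128): bdevperf on /var/tmp/bperf.sock is configured to count NVMe errors per bdev and to retry failed I/O indefinitely (--bdev-retry-count -1), the controller is attached with TCP data digest enabled (--ddgst), and CRC-32C corruption is re-armed in the accel layer via rpc_cmd before perform_tests starts the 2-second run. A minimal sketch of the same sequence, assuming an SPDK checkout in ./spdk and that rpc_cmd addresses the target application on its default RPC socket:

  # bdevperf side: per-bdev NVMe error counters, infinite bdev-layer retries
  ./spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side (default RPC socket assumed): clear any previous crc32c injection
  ./spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # attach the subsystem with TCP data digest enabled so digest errors surface end to end
  ./spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side: corrupt the next 256 crc32c operations
  ./spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  # kick off the timed bdevperf run
  ./spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests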
00:27:19.324 [2024-04-26 16:09:58.937347] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fdeb0 00:27:19.324 [2024-04-26 16:09:58.938248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.324 [2024-04-26 16:09:58.938289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:19.324 [2024-04-26 16:09:58.948288] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195feb58 00:27:19.324 [2024-04-26 16:09:58.949691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.324 [2024-04-26 16:09:58.949723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:19.324 [2024-04-26 16:09:58.959994] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7da8 00:27:19.324 [2024-04-26 16:09:58.961379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.324 [2024-04-26 16:09:58.961408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:19.324 [2024-04-26 16:09:58.970547] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dfdc0 00:27:19.324 [2024-04-26 16:09:58.971634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.324 [2024-04-26 16:09:58.971664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.324 [2024-04-26 16:09:58.981432] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de038 00:27:19.324 [2024-04-26 16:09:58.982412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.324 [2024-04-26 16:09:58.982439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.324 [2024-04-26 16:09:58.992366] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:27:19.324 [2024-04-26 16:09:58.993415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.324 [2024-04-26 16:09:58.993441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.324 [2024-04-26 16:09:59.003313] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:27:19.324 [2024-04-26 16:09:59.004572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.324 [2024-04-26 16:09:59.004599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.584 [2024-04-26 16:09:59.014430] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:27:19.584 [2024-04-26 16:09:59.015387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.584 [2024-04-26 16:09:59.015412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.584 [2024-04-26 16:09:59.025333] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:27:19.584 [2024-04-26 16:09:59.026423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.585 [2024-04-26 16:09:59.026448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.585 [2024-04-26 16:09:59.036574] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3d08 00:27:19.585 [2024-04-26 16:09:59.037586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.585 [2024-04-26 16:09:59.037612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.585 [2024-04-26 16:09:59.047337] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 00:27:19.585 [2024-04-26 16:09:59.048441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.585 [2024-04-26 16:09:59.048467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.585 [2024-04-26 16:09:59.058263] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0a68 00:27:19.585 [2024-04-26 16:09:59.059329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.585 [2024-04-26 16:09:59.059355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.585 [2024-04-26 16:09:59.069130] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de470 00:27:19.585 [2024-04-26 16:09:59.070130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.585 [2024-04-26 16:09:59.070156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.585 [2024-04-26 16:09:59.080010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5be8 00:27:19.585 [2024-04-26 16:09:59.081085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.585 [2024-04-26 16:09:59.081110] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.585 [2024-04-26 16:09:59.090870] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd208 00:27:19.585 [2024-04-26 16:09:59.091875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.585 [2024-04-26 16:09:59.091900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.585 [2024-04-26 16:09:59.101704] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:27:19.585 [2024-04-26 16:09:59.102738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.585 [2024-04-26 16:09:59.102764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.585 [2024-04-26 16:09:59.112623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa3a0 00:27:19.585 [2024-04-26 16:09:59.113716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.585 [2024-04-26 16:09:59.113741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.585 [2024-04-26 16:09:59.123482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:27:19.585 [2024-04-26 16:09:59.124606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.585 [2024-04-26 16:09:59.124631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.585 [2024-04-26 16:09:59.134381] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:27:19.585 [2024-04-26 16:09:59.135426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.585 [2024-04-26 16:09:59.135452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.585 [2024-04-26 16:09:59.145263] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7da8 00:27:19.585 [2024-04-26 16:09:59.146358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.585 [2024-04-26 16:09:59.146383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.585 [2024-04-26 16:09:59.156142] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dfdc0 00:27:19.585 [2024-04-26 16:09:59.157209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:769 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:19.585 [2024-04-26 16:09:59.157234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.585 [2024-04-26 16:09:59.167014] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de038 00:27:19.585 [2024-04-26 16:09:59.167954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.585 [2024-04-26 16:09:59.167980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.585 [2024-04-26 16:09:59.177857] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:27:19.585 [2024-04-26 16:09:59.178803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.585 [2024-04-26 16:09:59.178830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.585 [2024-04-26 16:09:59.188787] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:27:19.585 [2024-04-26 16:09:59.189840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.585 [2024-04-26 16:09:59.189866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.585 [2024-04-26 16:09:59.199880] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:27:19.585 [2024-04-26 16:09:59.200936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.585 [2024-04-26 16:09:59.200962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.585 [2024-04-26 16:09:59.210803] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:27:19.585 [2024-04-26 16:09:59.211857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.585 [2024-04-26 16:09:59.211883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.585 [2024-04-26 16:09:59.221683] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3d08 00:27:19.585 [2024-04-26 16:09:59.222689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.585 [2024-04-26 16:09:59.222715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.585 [2024-04-26 16:09:59.232577] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 00:27:19.585 [2024-04-26 16:09:59.233512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:82 nsid:1 lba:21137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.585 [2024-04-26 16:09:59.233538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.585 [2024-04-26 16:09:59.243438] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0a68 00:27:19.585 [2024-04-26 16:09:59.244527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.585 [2024-04-26 16:09:59.244552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.585 [2024-04-26 16:09:59.254321] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de470 00:27:19.585 [2024-04-26 16:09:59.255418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.585 [2024-04-26 16:09:59.255444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.585 [2024-04-26 16:09:59.265370] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5be8 00:27:19.585 [2024-04-26 16:09:59.266464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.585 [2024-04-26 16:09:59.266490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.846 [2024-04-26 16:09:59.276442] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd208 00:27:19.846 [2024-04-26 16:09:59.277546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.846 [2024-04-26 16:09:59.277573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.846 [2024-04-26 16:09:59.287312] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:27:19.846 [2024-04-26 16:09:59.288382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.846 [2024-04-26 16:09:59.288408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.846 [2024-04-26 16:09:59.298178] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa3a0 00:27:19.846 [2024-04-26 16:09:59.299143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.846 [2024-04-26 16:09:59.299169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.846 [2024-04-26 16:09:59.309020] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:27:19.846 [2024-04-26 16:09:59.310279] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.846 [2024-04-26 16:09:59.310304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.846 [2024-04-26 16:09:59.319916] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:27:19.846 [2024-04-26 16:09:59.321004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.846 [2024-04-26 16:09:59.321036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.846 [2024-04-26 16:09:59.330770] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7da8 00:27:19.846 [2024-04-26 16:09:59.331909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.846 [2024-04-26 16:09:59.331934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.846 [2024-04-26 16:09:59.341610] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dfdc0 00:27:19.846 [2024-04-26 16:09:59.342701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.846 [2024-04-26 16:09:59.342727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.846 [2024-04-26 16:09:59.352518] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de038 00:27:19.846 [2024-04-26 16:09:59.353619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.846 [2024-04-26 16:09:59.353645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.846 [2024-04-26 16:09:59.363406] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:27:19.846 [2024-04-26 16:09:59.364514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.846 [2024-04-26 16:09:59.364539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.846 [2024-04-26 16:09:59.374204] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:27:19.846 [2024-04-26 16:09:59.375322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.846 [2024-04-26 16:09:59.375347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.846 [2024-04-26 16:09:59.385124] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000195fac10 00:27:19.846 [2024-04-26 16:09:59.386195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.846 [2024-04-26 16:09:59.386221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.846 [2024-04-26 16:09:59.395961] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:27:19.846 [2024-04-26 16:09:59.397059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.846 [2024-04-26 16:09:59.397097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.846 [2024-04-26 16:09:59.406866] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3d08 00:27:19.846 [2024-04-26 16:09:59.407982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.846 [2024-04-26 16:09:59.408007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.846 [2024-04-26 16:09:59.417705] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 00:27:19.846 [2024-04-26 16:09:59.418809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.846 [2024-04-26 16:09:59.418835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.846 [2024-04-26 16:09:59.428566] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0a68 00:27:19.846 [2024-04-26 16:09:59.429579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.846 [2024-04-26 16:09:59.429604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.846 [2024-04-26 16:09:59.439434] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de470 00:27:19.846 [2024-04-26 16:09:59.440500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.846 [2024-04-26 16:09:59.440526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.846 [2024-04-26 16:09:59.450480] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5be8 00:27:19.846 [2024-04-26 16:09:59.451622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.846 [2024-04-26 16:09:59.451648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.846 [2024-04-26 16:09:59.461424] tcp.c:2047:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd208 00:27:19.846 [2024-04-26 16:09:59.462562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.846 [2024-04-26 16:09:59.462588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.846 [2024-04-26 16:09:59.472385] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:27:19.846 [2024-04-26 16:09:59.473399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.846 [2024-04-26 16:09:59.473424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.846 [2024-04-26 16:09:59.483209] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa3a0 00:27:19.847 [2024-04-26 16:09:59.484156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.847 [2024-04-26 16:09:59.484182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.847 [2024-04-26 16:09:59.494068] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:27:19.847 [2024-04-26 16:09:59.495157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.847 [2024-04-26 16:09:59.495183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.847 [2024-04-26 16:09:59.504904] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:27:19.847 [2024-04-26 16:09:59.505997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.847 [2024-04-26 16:09:59.506025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.847 [2024-04-26 16:09:59.515730] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7da8 00:27:19.847 [2024-04-26 16:09:59.516822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.847 [2024-04-26 16:09:59.516848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:19.847 [2024-04-26 16:09:59.526778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dfdc0 00:27:19.847 [2024-04-26 16:09:59.527878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.847 [2024-04-26 16:09:59.527904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.107 [2024-04-26 
16:09:59.537813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de038 00:27:20.107 [2024-04-26 16:09:59.538838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.107 [2024-04-26 16:09:59.538865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.107 [2024-04-26 16:09:59.548674] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:27:20.107 [2024-04-26 16:09:59.549721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.107 [2024-04-26 16:09:59.549747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.107 [2024-04-26 16:09:59.559549] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:27:20.107 [2024-04-26 16:09:59.560574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.107 [2024-04-26 16:09:59.560600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.107 [2024-04-26 16:09:59.570269] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:27:20.107 [2024-04-26 16:09:59.571211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.107 [2024-04-26 16:09:59.571236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.107 [2024-04-26 16:09:59.581138] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:27:20.107 [2024-04-26 16:09:59.582090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.107 [2024-04-26 16:09:59.582115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.107 [2024-04-26 16:09:59.592160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3d08 00:27:20.107 [2024-04-26 16:09:59.593127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.107 [2024-04-26 16:09:59.593154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.107 [2024-04-26 16:09:59.603104] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 00:27:20.107 [2024-04-26 16:09:59.604127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.107 [2024-04-26 16:09:59.604153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.107 [2024-04-26 16:09:59.613943] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0a68 00:27:20.107 [2024-04-26 16:09:59.614924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.107 [2024-04-26 16:09:59.614949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.107 [2024-04-26 16:09:59.624797] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de470 00:27:20.107 [2024-04-26 16:09:59.625831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.107 [2024-04-26 16:09:59.625857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.107 [2024-04-26 16:09:59.635686] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5be8 00:27:20.107 [2024-04-26 16:09:59.636790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.107 [2024-04-26 16:09:59.636815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.107 [2024-04-26 16:09:59.646582] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd208 00:27:20.107 [2024-04-26 16:09:59.647588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.107 [2024-04-26 16:09:59.647613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.107 [2024-04-26 16:09:59.657452] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:27:20.107 [2024-04-26 16:09:59.658529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.107 [2024-04-26 16:09:59.658555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.107 [2024-04-26 16:09:59.668298] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa3a0 00:27:20.107 [2024-04-26 16:09:59.669360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.107 [2024-04-26 16:09:59.669385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.107 [2024-04-26 16:09:59.679180] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:27:20.107 [2024-04-26 16:09:59.680188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.107 [2024-04-26 16:09:59.680214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.107 [2024-04-26 16:09:59.690026] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:27:20.107 [2024-04-26 16:09:59.691081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.107 [2024-04-26 16:09:59.691106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.107 [2024-04-26 16:09:59.701113] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7da8 00:27:20.108 [2024-04-26 16:09:59.702176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.108 [2024-04-26 16:09:59.702202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.108 [2024-04-26 16:09:59.712092] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dfdc0 00:27:20.108 [2024-04-26 16:09:59.713119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.108 [2024-04-26 16:09:59.713145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.108 [2024-04-26 16:09:59.723012] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de038 00:27:20.108 [2024-04-26 16:09:59.724074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.108 [2024-04-26 16:09:59.724100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.108 [2024-04-26 16:09:59.733973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:27:20.108 [2024-04-26 16:09:59.734993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.108 [2024-04-26 16:09:59.735018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.108 [2024-04-26 16:09:59.744766] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:27:20.108 [2024-04-26 16:09:59.745796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.108 [2024-04-26 16:09:59.745822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.108 [2024-04-26 16:09:59.755619] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:27:20.108 [2024-04-26 16:09:59.756563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.108 [2024-04-26 
16:09:59.756589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.108 [2024-04-26 16:09:59.766520] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:27:20.108 [2024-04-26 16:09:59.767589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.108 [2024-04-26 16:09:59.767615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.108 [2024-04-26 16:09:59.777398] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3d08 00:27:20.108 [2024-04-26 16:09:59.778376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.108 [2024-04-26 16:09:59.778402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.108 [2024-04-26 16:09:59.788521] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 00:27:20.367 [2024-04-26 16:09:59.789568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.367 [2024-04-26 16:09:59.789597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.367 [2024-04-26 16:09:59.799586] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0a68 00:27:20.367 [2024-04-26 16:09:59.800553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.367 [2024-04-26 16:09:59.800579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.367 [2024-04-26 16:09:59.810440] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de470 00:27:20.367 [2024-04-26 16:09:59.811396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.367 [2024-04-26 16:09:59.811422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.367 [2024-04-26 16:09:59.821589] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5be8 00:27:20.367 [2024-04-26 16:09:59.822554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.367 [2024-04-26 16:09:59.822580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.367 [2024-04-26 16:09:59.832483] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd208 00:27:20.367 [2024-04-26 16:09:59.833448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8326 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.367 [2024-04-26 16:09:59.833474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.367 [2024-04-26 16:09:59.843500] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:27:20.367 [2024-04-26 16:09:59.844475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.367 [2024-04-26 16:09:59.844501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.367 [2024-04-26 16:09:59.854589] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa3a0 00:27:20.367 [2024-04-26 16:09:59.855544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.367 [2024-04-26 16:09:59.855570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.367 [2024-04-26 16:09:59.865454] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:27:20.367 [2024-04-26 16:09:59.866402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.367 [2024-04-26 16:09:59.866427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.367 [2024-04-26 16:09:59.876317] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:27:20.367 [2024-04-26 16:09:59.877291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.367 [2024-04-26 16:09:59.877317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.367 [2024-04-26 16:09:59.887187] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7da8 00:27:20.367 [2024-04-26 16:09:59.888146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.367 [2024-04-26 16:09:59.888171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.367 [2024-04-26 16:09:59.898057] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dfdc0 00:27:20.367 [2024-04-26 16:09:59.899016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.367 [2024-04-26 16:09:59.899040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.367 [2024-04-26 16:09:59.908966] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de038 00:27:20.367 [2024-04-26 16:09:59.909972] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.367 [2024-04-26 16:09:59.909998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.367 [2024-04-26 16:09:59.919827] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:27:20.368 [2024-04-26 16:09:59.920806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.368 [2024-04-26 16:09:59.920832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.368 [2024-04-26 16:09:59.930682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:27:20.368 [2024-04-26 16:09:59.931662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.368 [2024-04-26 16:09:59.931687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.368 [2024-04-26 16:09:59.941917] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:27:20.368 [2024-04-26 16:09:59.942897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.368 [2024-04-26 16:09:59.942923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.368 [2024-04-26 16:09:59.953133] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:27:20.368 [2024-04-26 16:09:59.954176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.368 [2024-04-26 16:09:59.954202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.368 [2024-04-26 16:09:59.964411] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3d08 00:27:20.368 [2024-04-26 16:09:59.965450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.368 [2024-04-26 16:09:59.965476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.368 [2024-04-26 16:09:59.975827] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 00:27:20.368 [2024-04-26 16:09:59.976858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.368 [2024-04-26 16:09:59.976888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.368 [2024-04-26 16:09:59.987006] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0a68 00:27:20.368 [2024-04-26 
16:09:59.987994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.368 [2024-04-26 16:09:59.988021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.368 [2024-04-26 16:09:59.998341] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de470 00:27:20.368 [2024-04-26 16:09:59.999332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.368 [2024-04-26 16:09:59.999358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.368 [2024-04-26 16:10:00.010871] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5be8 00:27:20.368 [2024-04-26 16:10:00.012001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.368 [2024-04-26 16:10:00.012028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.368 [2024-04-26 16:10:00.022258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd208 00:27:20.368 [2024-04-26 16:10:00.024021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.368 [2024-04-26 16:10:00.024049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.368 [2024-04-26 16:10:00.034534] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:27:20.368 [2024-04-26 16:10:00.035533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.368 [2024-04-26 16:10:00.035559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.368 [2024-04-26 16:10:00.045953] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa3a0 00:27:20.368 [2024-04-26 16:10:00.046937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.368 [2024-04-26 16:10:00.046963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.629 [2024-04-26 16:10:00.057535] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:27:20.629 [2024-04-26 16:10:00.058532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.629 [2024-04-26 16:10:00.058558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.629 [2024-04-26 16:10:00.068926] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x2000195e4578 00:27:20.629 [2024-04-26 16:10:00.069898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.629 [2024-04-26 16:10:00.069924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.629 [2024-04-26 16:10:00.080216] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7da8 00:27:20.629 [2024-04-26 16:10:00.081205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.630 [2024-04-26 16:10:00.081231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.630 [2024-04-26 16:10:00.091531] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dfdc0 00:27:20.630 [2024-04-26 16:10:00.093027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.630 [2024-04-26 16:10:00.093054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.630 [2024-04-26 16:10:00.103155] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de038 00:27:20.630 [2024-04-26 16:10:00.104103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.630 [2024-04-26 16:10:00.104129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.630 [2024-04-26 16:10:00.114278] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:27:20.630 [2024-04-26 16:10:00.115279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.630 [2024-04-26 16:10:00.115306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.630 [2024-04-26 16:10:00.125434] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:27:20.630 [2024-04-26 16:10:00.126439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.630 [2024-04-26 16:10:00.126465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.630 [2024-04-26 16:10:00.136585] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:27:20.630 [2024-04-26 16:10:00.137637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.630 [2024-04-26 16:10:00.137663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.630 [2024-04-26 16:10:00.147686] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:27:20.630 [2024-04-26 16:10:00.148736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.630 [2024-04-26 16:10:00.148763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.630 [2024-04-26 16:10:00.158834] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3d08 00:27:20.630 [2024-04-26 16:10:00.159819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.630 [2024-04-26 16:10:00.159845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.630 [2024-04-26 16:10:00.169963] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 00:27:20.630 [2024-04-26 16:10:00.170968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.630 [2024-04-26 16:10:00.170994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.630 [2024-04-26 16:10:00.181162] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0a68 00:27:20.630 [2024-04-26 16:10:00.182131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.630 [2024-04-26 16:10:00.182157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.630 [2024-04-26 16:10:00.192300] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de470 00:27:20.630 [2024-04-26 16:10:00.193300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.630 [2024-04-26 16:10:00.193325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.630 [2024-04-26 16:10:00.203568] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5be8 00:27:20.630 [2024-04-26 16:10:00.204619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.630 [2024-04-26 16:10:00.204645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.630 [2024-04-26 16:10:00.214821] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd208 00:27:20.630 [2024-04-26 16:10:00.215857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.630 [2024-04-26 16:10:00.215884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0051 
p:0 m:0 dnr:0 00:27:20.630 [2024-04-26 16:10:00.226058] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:27:20.630 [2024-04-26 16:10:00.227028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.630 [2024-04-26 16:10:00.227054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.630 [2024-04-26 16:10:00.237226] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa3a0 00:27:20.630 [2024-04-26 16:10:00.238252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.630 [2024-04-26 16:10:00.238278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.630 [2024-04-26 16:10:00.248282] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:27:20.630 [2024-04-26 16:10:00.249233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.630 [2024-04-26 16:10:00.249258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.630 [2024-04-26 16:10:00.259187] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:27:20.630 [2024-04-26 16:10:00.260131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.630 [2024-04-26 16:10:00.260155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.630 [2024-04-26 16:10:00.270065] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7da8 00:27:20.630 [2024-04-26 16:10:00.271008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.630 [2024-04-26 16:10:00.271037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.630 [2024-04-26 16:10:00.281064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dfdc0 00:27:20.630 [2024-04-26 16:10:00.282022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.630 [2024-04-26 16:10:00.282047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.630 [2024-04-26 16:10:00.291924] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de038 00:27:20.630 [2024-04-26 16:10:00.292867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.630 [2024-04-26 16:10:00.292893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.630 [2024-04-26 16:10:00.302813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:27:20.630 [2024-04-26 16:10:00.303749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.630 [2024-04-26 16:10:00.303774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.891 [2024-04-26 16:10:00.314033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:27:20.891 [2024-04-26 16:10:00.315023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.891 [2024-04-26 16:10:00.315049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.891 [2024-04-26 16:10:00.325056] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:27:20.891 [2024-04-26 16:10:00.326037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.891 [2024-04-26 16:10:00.326063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.891 [2024-04-26 16:10:00.336005] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:27:20.891 [2024-04-26 16:10:00.336953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.891 [2024-04-26 16:10:00.336978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.891 [2024-04-26 16:10:00.346902] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3d08 00:27:20.891 [2024-04-26 16:10:00.347849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.891 [2024-04-26 16:10:00.347875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.891 [2024-04-26 16:10:00.357784] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 00:27:20.891 [2024-04-26 16:10:00.358721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.891 [2024-04-26 16:10:00.358746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.891 [2024-04-26 16:10:00.368749] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0a68 00:27:20.891 [2024-04-26 16:10:00.369702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.891 [2024-04-26 16:10:00.369727] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.891 [2024-04-26 16:10:00.379752] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de470 00:27:20.891 [2024-04-26 16:10:00.380732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.891 [2024-04-26 16:10:00.380756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.891 [2024-04-26 16:10:00.390674] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5be8 00:27:20.891 [2024-04-26 16:10:00.391633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.891 [2024-04-26 16:10:00.391658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.891 [2024-04-26 16:10:00.401635] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd208 00:27:20.891 [2024-04-26 16:10:00.402676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.891 [2024-04-26 16:10:00.402702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.891 [2024-04-26 16:10:00.412563] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:27:20.891 [2024-04-26 16:10:00.413557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.891 [2024-04-26 16:10:00.413582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.891 [2024-04-26 16:10:00.423547] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa3a0 00:27:20.892 [2024-04-26 16:10:00.424547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.892 [2024-04-26 16:10:00.424572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.892 [2024-04-26 16:10:00.434537] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:27:20.892 [2024-04-26 16:10:00.435531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.892 [2024-04-26 16:10:00.435556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.892 [2024-04-26 16:10:00.445388] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:27:20.892 [2024-04-26 16:10:00.446444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5272 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:27:20.892 [2024-04-26 16:10:00.446470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.892 [2024-04-26 16:10:00.456545] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7da8 00:27:20.892 [2024-04-26 16:10:00.457549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.892 [2024-04-26 16:10:00.457579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.892 [2024-04-26 16:10:00.467610] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dfdc0 00:27:20.892 [2024-04-26 16:10:00.468591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.892 [2024-04-26 16:10:00.468617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.892 [2024-04-26 16:10:00.478626] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de038 00:27:20.892 [2024-04-26 16:10:00.479601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.892 [2024-04-26 16:10:00.479626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.892 [2024-04-26 16:10:00.489675] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:27:20.892 [2024-04-26 16:10:00.490654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.892 [2024-04-26 16:10:00.490680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.892 [2024-04-26 16:10:00.500661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:27:20.892 [2024-04-26 16:10:00.501615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.892 [2024-04-26 16:10:00.501640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.892 [2024-04-26 16:10:00.511562] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:27:20.892 [2024-04-26 16:10:00.512524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.892 [2024-04-26 16:10:00.512549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.892 [2024-04-26 16:10:00.522436] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:27:20.892 [2024-04-26 16:10:00.523393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:54 nsid:1 lba:7598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.892 [2024-04-26 16:10:00.523419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.892 [2024-04-26 16:10:00.533358] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3d08 00:27:20.892 [2024-04-26 16:10:00.534394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.892 [2024-04-26 16:10:00.534421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.892 [2024-04-26 16:10:00.544326] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 00:27:20.892 [2024-04-26 16:10:00.545268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.892 [2024-04-26 16:10:00.545293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.892 [2024-04-26 16:10:00.555194] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0a68 00:27:20.892 [2024-04-26 16:10:00.556151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.892 [2024-04-26 16:10:00.556176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:20.892 [2024-04-26 16:10:00.566049] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de470 00:27:20.892 [2024-04-26 16:10:00.567110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.892 [2024-04-26 16:10:00.567135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.152 [2024-04-26 16:10:00.577572] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5be8 00:27:21.152 [2024-04-26 16:10:00.578633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.152 [2024-04-26 16:10:00.578658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.152 [2024-04-26 16:10:00.588520] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd208 00:27:21.152 [2024-04-26 16:10:00.589480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.589506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.153 [2024-04-26 16:10:00.599379] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:27:21.153 [2024-04-26 16:10:00.600420] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.600446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.153 [2024-04-26 16:10:00.610222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa3a0 00:27:21.153 [2024-04-26 16:10:00.611178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.611203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.153 [2024-04-26 16:10:00.621121] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:27:21.153 [2024-04-26 16:10:00.622060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.622090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.153 [2024-04-26 16:10:00.631988] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:27:21.153 [2024-04-26 16:10:00.632944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.632969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.153 [2024-04-26 16:10:00.642928] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7da8 00:27:21.153 [2024-04-26 16:10:00.643922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.643948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.153 [2024-04-26 16:10:00.653838] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dfdc0 00:27:21.153 [2024-04-26 16:10:00.654900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.654926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.153 [2024-04-26 16:10:00.664810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de038 00:27:21.153 [2024-04-26 16:10:00.665799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.665825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.153 [2024-04-26 16:10:00.675776] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000195fcdd0 00:27:21.153 [2024-04-26 16:10:00.676750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.676775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.153 [2024-04-26 16:10:00.686650] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:27:21.153 [2024-04-26 16:10:00.687605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.687630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.153 [2024-04-26 16:10:00.697558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:27:21.153 [2024-04-26 16:10:00.698550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.698575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.153 [2024-04-26 16:10:00.708586] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:27:21.153 [2024-04-26 16:10:00.709585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.709615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.153 [2024-04-26 16:10:00.719560] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3d08 00:27:21.153 [2024-04-26 16:10:00.720560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.720585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.153 [2024-04-26 16:10:00.730569] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 00:27:21.153 [2024-04-26 16:10:00.731525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.731550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.153 [2024-04-26 16:10:00.741434] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0a68 00:27:21.153 [2024-04-26 16:10:00.742404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.742430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.153 [2024-04-26 16:10:00.752309] tcp.c:2047:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de470 00:27:21.153 [2024-04-26 16:10:00.753265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.753289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.153 [2024-04-26 16:10:00.763176] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5be8 00:27:21.153 [2024-04-26 16:10:00.764121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.764147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.153 [2024-04-26 16:10:00.774019] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd208 00:27:21.153 [2024-04-26 16:10:00.774983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.775009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.153 [2024-04-26 16:10:00.784963] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:27:21.153 [2024-04-26 16:10:00.785918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.785943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.153 [2024-04-26 16:10:00.795821] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa3a0 00:27:21.153 [2024-04-26 16:10:00.796758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.796783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.153 [2024-04-26 16:10:00.806676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:27:21.153 [2024-04-26 16:10:00.807710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.807735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.153 [2024-04-26 16:10:00.817577] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:27:21.153 [2024-04-26 16:10:00.818788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.818813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.153 
[2024-04-26 16:10:00.828624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7da8 00:27:21.153 [2024-04-26 16:10:00.829709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.153 [2024-04-26 16:10:00.829734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.413 [2024-04-26 16:10:00.840171] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dfdc0 00:27:21.413 [2024-04-26 16:10:00.841210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.413 [2024-04-26 16:10:00.841236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.413 [2024-04-26 16:10:00.851150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de038 00:27:21.413 [2024-04-26 16:10:00.852100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.413 [2024-04-26 16:10:00.852125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.413 [2024-04-26 16:10:00.862012] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fcdd0 00:27:21.413 [2024-04-26 16:10:00.862956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.413 [2024-04-26 16:10:00.862981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.413 [2024-04-26 16:10:00.872885] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb8b8 00:27:21.413 [2024-04-26 16:10:00.873837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.413 [2024-04-26 16:10:00.873863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.413 [2024-04-26 16:10:00.883744] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:27:21.413 [2024-04-26 16:10:00.884702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.413 [2024-04-26 16:10:00.884727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:21.413 [2024-04-26 16:10:00.894630] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:27:21.413 [2024-04-26 16:10:00.895631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:21.413 [2024-04-26 16:10:00.895656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:27:21.413 [2024-04-26 16:10:00.905566] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3d08
00:27:21.413 [2024-04-26 16:10:00.906666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:21.413 [2024-04-26 16:10:00.906692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:27:21.413 [2024-04-26 16:10:00.916393] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538
00:27:21.413 [2024-04-26 16:10:00.917492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:21.413 [2024-04-26 16:10:00.917518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:27:21.413
00:27:21.413 Latency(us)
00:27:21.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:21.413 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:21.413 nvme0n1 : 2.00 23029.19 89.96 0.00 0.00 5549.19 3476.26 22567.18
00:27:21.413 ===================================================================================================================
00:27:21.413 Total : 23029.19 89.96 0.00 0.00 5549.19 3476.26 22567.18
00:27:21.413 0
00:27:21.413 16:10:00 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:21.413 16:10:00 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:21.413 16:10:00 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:21.413 | .driver_specific
00:27:21.413 | .nvme_error
00:27:21.413 | .status_code
00:27:21.413 | .command_transient_transport_error'
00:27:21.413 16:10:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:21.685 16:10:01 -- host/digest.sh@71 -- # (( 181 > 0 ))
00:27:21.685 16:10:01 -- host/digest.sh@73 -- # killprocess 2593081
00:27:21.685 16:10:01 -- common/autotest_common.sh@936 -- # '[' -z 2593081 ']'
00:27:21.685 16:10:01 -- common/autotest_common.sh@940 -- # kill -0 2593081
00:27:21.685 16:10:01 -- common/autotest_common.sh@941 -- # uname
00:27:21.685 16:10:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:27:21.685 16:10:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2593081
00:27:21.685 16:10:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:27:21.685 16:10:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:27:21.685 16:10:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2593081'
00:27:21.685 killing process with pid 2593081
00:27:21.685 16:10:01 -- common/autotest_common.sh@955 -- # kill 2593081
00:27:21.685 Received shutdown signal, test time was about 2.000000 seconds
00:27:21.685
00:27:21.685 Latency(us)
00:27:21.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:21.685 ===================================================================================================================
00:27:21.685 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:21.685 16:10:01 -- common/autotest_common.sh@960 -- # wait 2593081
00:27:22.733 16:10:02 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:27:22.733 16:10:02 -- host/digest.sh@54 -- # local rw bs qd
00:27:22.733 16:10:02 -- host/digest.sh@56 -- # rw=randwrite
00:27:22.733 16:10:02 -- host/digest.sh@56 -- # bs=131072
00:27:22.733 16:10:02 -- host/digest.sh@56 -- # qd=16
00:27:22.733 16:10:02 -- host/digest.sh@58 -- # bperfpid=2593788
00:27:22.733 16:10:02 -- host/digest.sh@60 -- # waitforlisten 2593788 /var/tmp/bperf.sock
00:27:22.733 16:10:02 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:27:22.733 16:10:02 -- common/autotest_common.sh@817 -- # '[' -z 2593788 ']'
00:27:22.733 16:10:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:22.733 16:10:02 -- common/autotest_common.sh@822 -- # local max_retries=100
00:27:22.733 16:10:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:22.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:22.733 16:10:02 -- common/autotest_common.sh@826 -- # xtrace_disable
00:27:22.733 16:10:02 -- common/autotest_common.sh@10 -- # set +x
00:27:22.733 [2024-04-26 16:10:02.258747] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:27:22.733 [2024-04-26 16:10:02.258840] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2593788 ]
00:27:22.733 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:22.733 Zero copy mechanism will not be used.
00:27:22.733 EAL: No free 2048 kB hugepages reported on node 1
00:27:22.733 [2024-04-26 16:10:02.361920] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:22.993 [2024-04-26 16:10:02.590253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:23.561 16:10:03 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:27:23.561 16:10:03 -- common/autotest_common.sh@850 -- # return 0
00:27:23.561 16:10:03 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:23.561 16:10:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:23.561 16:10:03 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:23.561 16:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:27:23.561 16:10:03 -- common/autotest_common.sh@10 -- # set +x
00:27:23.561 16:10:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:27:23.561 16:10:03 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:23.561 16:10:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:24.128 nvme0n1
00:27:24.128 16:10:03 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:24.128 16:10:03 -- common/autotest_common.sh@549 -- # xtrace_disable
00:27:24.128 16:10:03 -- common/autotest_common.sh@10 -- # set +x
00:27:24.128 16:10:03 -- common/autotest_common.sh@577 -- #
[[ 0 == 0 ]] 00:27:24.128 16:10:03 -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:24.128 16:10:03 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:24.128 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:24.128 Zero copy mechanism will not be used. 00:27:24.128 Running I/O for 2 seconds... 00:27:24.128 [2024-04-26 16:10:03.725501] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.128 [2024-04-26 16:10:03.726170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.128 [2024-04-26 16:10:03.726211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.128 [2024-04-26 16:10:03.740643] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.128 [2024-04-26 16:10:03.741123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.128 [2024-04-26 16:10:03.741155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.128 [2024-04-26 16:10:03.754909] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.128 [2024-04-26 16:10:03.755392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.128 [2024-04-26 16:10:03.755421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.128 [2024-04-26 16:10:03.768840] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.128 [2024-04-26 16:10:03.769465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.128 [2024-04-26 16:10:03.769493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.128 [2024-04-26 16:10:03.783591] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.128 [2024-04-26 16:10:03.784065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.128 [2024-04-26 16:10:03.784113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.128 [2024-04-26 16:10:03.798919] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.128 [2024-04-26 16:10:03.799433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.128 [2024-04-26 16:10:03.799460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.388 [2024-04-26 
16:10:03.818876] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.388 [2024-04-26 16:10:03.819509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.388 [2024-04-26 16:10:03.819537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.388 [2024-04-26 16:10:03.838231] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.388 [2024-04-26 16:10:03.838741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.388 [2024-04-26 16:10:03.838769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.388 [2024-04-26 16:10:03.852010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.388 [2024-04-26 16:10:03.852496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.388 [2024-04-26 16:10:03.852524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.388 [2024-04-26 16:10:03.866266] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.388 [2024-04-26 16:10:03.866784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.388 [2024-04-26 16:10:03.866811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.388 [2024-04-26 16:10:03.880813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.388 [2024-04-26 16:10:03.881295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.388 [2024-04-26 16:10:03.881321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.388 [2024-04-26 16:10:03.903130] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.388 [2024-04-26 16:10:03.903854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.388 [2024-04-26 16:10:03.903881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.388 [2024-04-26 16:10:03.921127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.388 [2024-04-26 16:10:03.921628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.388 [2024-04-26 16:10:03.921655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.388 [2024-04-26 16:10:03.943240] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.388 [2024-04-26 16:10:03.943984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.388 [2024-04-26 16:10:03.944014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.388 [2024-04-26 16:10:03.960698] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.388 [2024-04-26 16:10:03.961172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.388 [2024-04-26 16:10:03.961198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.388 [2024-04-26 16:10:03.976209] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.388 [2024-04-26 16:10:03.976690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.388 [2024-04-26 16:10:03.976716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.388 [2024-04-26 16:10:03.990844] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.388 [2024-04-26 16:10:03.991360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.388 [2024-04-26 16:10:03.991387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.388 [2024-04-26 16:10:04.006386] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.388 [2024-04-26 16:10:04.006902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.388 [2024-04-26 16:10:04.006928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.388 [2024-04-26 16:10:04.020329] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.388 [2024-04-26 16:10:04.020825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.388 [2024-04-26 16:10:04.020852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.388 [2024-04-26 16:10:04.035305] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.388 [2024-04-26 16:10:04.035769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.388 [2024-04-26 16:10:04.035796] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.388 [2024-04-26 16:10:04.050058] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.388 [2024-04-26 16:10:04.050640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.388 [2024-04-26 16:10:04.050667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.388 [2024-04-26 16:10:04.065324] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.388 [2024-04-26 16:10:04.065814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.389 [2024-04-26 16:10:04.065842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.648 [2024-04-26 16:10:04.078955] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.649 [2024-04-26 16:10:04.079532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.649 [2024-04-26 16:10:04.079559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.649 [2024-04-26 16:10:04.093316] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.649 [2024-04-26 16:10:04.093797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.649 [2024-04-26 16:10:04.093825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.649 [2024-04-26 16:10:04.107225] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.649 [2024-04-26 16:10:04.107704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.649 [2024-04-26 16:10:04.107731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.649 [2024-04-26 16:10:04.120974] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.649 [2024-04-26 16:10:04.121461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.649 [2024-04-26 16:10:04.121488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.649 [2024-04-26 16:10:04.135348] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.649 [2024-04-26 16:10:04.135915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:24.649 [2024-04-26 16:10:04.135942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.649 [2024-04-26 16:10:04.149981] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.649 [2024-04-26 16:10:04.150469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.649 [2024-04-26 16:10:04.150495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.649 [2024-04-26 16:10:04.164117] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.649 [2024-04-26 16:10:04.164434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.649 [2024-04-26 16:10:04.164460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.649 [2024-04-26 16:10:04.178901] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.649 [2024-04-26 16:10:04.179393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.649 [2024-04-26 16:10:04.179421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.649 [2024-04-26 16:10:04.193498] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.649 [2024-04-26 16:10:04.193970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.649 [2024-04-26 16:10:04.193997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.649 [2024-04-26 16:10:04.206735] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.649 [2024-04-26 16:10:04.207242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.649 [2024-04-26 16:10:04.207271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.649 [2024-04-26 16:10:04.220697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.649 [2024-04-26 16:10:04.221190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.649 [2024-04-26 16:10:04.221218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.649 [2024-04-26 16:10:04.235168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.649 [2024-04-26 16:10:04.235672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.649 [2024-04-26 16:10:04.235700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.649 [2024-04-26 16:10:04.248900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.649 [2024-04-26 16:10:04.249402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.649 [2024-04-26 16:10:04.249430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.649 [2024-04-26 16:10:04.262137] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.649 [2024-04-26 16:10:04.262739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.649 [2024-04-26 16:10:04.262767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.649 [2024-04-26 16:10:04.276559] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.649 [2024-04-26 16:10:04.276892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.649 [2024-04-26 16:10:04.276920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.649 [2024-04-26 16:10:04.290045] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.649 [2024-04-26 16:10:04.290635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.649 [2024-04-26 16:10:04.290663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.649 [2024-04-26 16:10:04.302177] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.649 [2024-04-26 16:10:04.302797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.649 [2024-04-26 16:10:04.302824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.649 [2024-04-26 16:10:04.314135] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.649 [2024-04-26 16:10:04.314626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.649 [2024-04-26 16:10:04.314653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.649 [2024-04-26 16:10:04.327015] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:27:24.649 [2024-04-26 16:10:04.327592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.649 [2024-04-26 16:10:04.327620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.909 [2024-04-26 16:10:04.340274] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.909 [2024-04-26 16:10:04.340873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.909 [2024-04-26 16:10:04.340899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.909 [2024-04-26 16:10:04.353410] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.909 [2024-04-26 16:10:04.353991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.909 [2024-04-26 16:10:04.354019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.909 [2024-04-26 16:10:04.366842] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.909 [2024-04-26 16:10:04.367332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.909 [2024-04-26 16:10:04.367358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.909 [2024-04-26 16:10:04.379584] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.909 [2024-04-26 16:10:04.380248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.909 [2024-04-26 16:10:04.380275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.909 [2024-04-26 16:10:04.392582] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.909 [2024-04-26 16:10:04.393294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.909 [2024-04-26 16:10:04.393321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.909 [2024-04-26 16:10:04.406439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.909 [2024-04-26 16:10:04.407038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.909 [2024-04-26 16:10:04.407065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.909 [2024-04-26 16:10:04.420200] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.909 [2024-04-26 16:10:04.420782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.909 [2024-04-26 16:10:04.420809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.909 [2024-04-26 16:10:04.433994] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.909 [2024-04-26 16:10:04.434543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.909 [2024-04-26 16:10:04.434570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.909 [2024-04-26 16:10:04.447704] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.910 [2024-04-26 16:10:04.448279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.910 [2024-04-26 16:10:04.448306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.910 [2024-04-26 16:10:04.461242] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.910 [2024-04-26 16:10:04.461857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.910 [2024-04-26 16:10:04.461883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.910 [2024-04-26 16:10:04.474581] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.910 [2024-04-26 16:10:04.475226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.910 [2024-04-26 16:10:04.475252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.910 [2024-04-26 16:10:04.487331] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.910 [2024-04-26 16:10:04.487992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.910 [2024-04-26 16:10:04.488020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.910 [2024-04-26 16:10:04.500305] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.910 [2024-04-26 16:10:04.500956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.910 [2024-04-26 16:10:04.500983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.910 [2024-04-26 16:10:04.513914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.910 [2024-04-26 16:10:04.514633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.910 [2024-04-26 16:10:04.514660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.910 [2024-04-26 16:10:04.526115] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.910 [2024-04-26 16:10:04.526698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.910 [2024-04-26 16:10:04.526725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.910 [2024-04-26 16:10:04.538676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.910 [2024-04-26 16:10:04.539352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.910 [2024-04-26 16:10:04.539379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.910 [2024-04-26 16:10:04.552457] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.910 [2024-04-26 16:10:04.553135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.910 [2024-04-26 16:10:04.553161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.910 [2024-04-26 16:10:04.565593] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.910 [2024-04-26 16:10:04.566094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.910 [2024-04-26 16:10:04.566119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.910 [2024-04-26 16:10:04.578655] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:24.910 [2024-04-26 16:10:04.579235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.910 [2024-04-26 16:10:04.579262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.910 [2024-04-26 16:10:04.591493] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.170 [2024-04-26 16:10:04.592037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.170 [2024-04-26 16:10:04.592063] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.170 [2024-04-26 16:10:04.602578] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.170 [2024-04-26 16:10:04.603055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.170 [2024-04-26 16:10:04.603085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.170 [2024-04-26 16:10:04.615901] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.170 [2024-04-26 16:10:04.616520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.170 [2024-04-26 16:10:04.616546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.170 [2024-04-26 16:10:04.629965] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.170 [2024-04-26 16:10:04.630591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.170 [2024-04-26 16:10:04.630618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.170 [2024-04-26 16:10:04.643305] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.170 [2024-04-26 16:10:04.643976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.170 [2024-04-26 16:10:04.644002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.170 [2024-04-26 16:10:04.657885] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.170 [2024-04-26 16:10:04.658575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.170 [2024-04-26 16:10:04.658601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.170 [2024-04-26 16:10:04.671019] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.170 [2024-04-26 16:10:04.671598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.170 [2024-04-26 16:10:04.671624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.170 [2024-04-26 16:10:04.684169] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.170 [2024-04-26 16:10:04.684750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:25.170 [2024-04-26 16:10:04.684775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.170 [2024-04-26 16:10:04.697660] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.170 [2024-04-26 16:10:04.698271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.170 [2024-04-26 16:10:04.698298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.170 [2024-04-26 16:10:04.710121] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.170 [2024-04-26 16:10:04.710748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.170 [2024-04-26 16:10:04.710774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.170 [2024-04-26 16:10:04.723979] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.170 [2024-04-26 16:10:04.724460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.170 [2024-04-26 16:10:04.724485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.170 [2024-04-26 16:10:04.737077] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.170 [2024-04-26 16:10:04.737613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.170 [2024-04-26 16:10:04.737639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.170 [2024-04-26 16:10:04.750178] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.171 [2024-04-26 16:10:04.750714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.171 [2024-04-26 16:10:04.750740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.171 [2024-04-26 16:10:04.763140] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.171 [2024-04-26 16:10:04.763736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.171 [2024-04-26 16:10:04.763768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.171 [2024-04-26 16:10:04.776134] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.171 [2024-04-26 16:10:04.776739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.171 [2024-04-26 16:10:04.776765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.171 [2024-04-26 16:10:04.788592] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.171 [2024-04-26 16:10:04.789054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.171 [2024-04-26 16:10:04.789086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.171 [2024-04-26 16:10:04.801154] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.171 [2024-04-26 16:10:04.801710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.171 [2024-04-26 16:10:04.801737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.171 [2024-04-26 16:10:04.814621] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.171 [2024-04-26 16:10:04.815394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.171 [2024-04-26 16:10:04.815420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.171 [2024-04-26 16:10:04.827952] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.171 [2024-04-26 16:10:04.828620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.171 [2024-04-26 16:10:04.828647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.171 [2024-04-26 16:10:04.841552] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.171 [2024-04-26 16:10:04.842047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.171 [2024-04-26 16:10:04.842079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.431 [2024-04-26 16:10:04.853590] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.431 [2024-04-26 16:10:04.854034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.431 [2024-04-26 16:10:04.854061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.431 [2024-04-26 16:10:04.864862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:27:25.431 [2024-04-26 16:10:04.865371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.431 [2024-04-26 16:10:04.865397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.431 [2024-04-26 16:10:04.876857] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.431 [2024-04-26 16:10:04.877410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.431 [2024-04-26 16:10:04.877436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.431 [2024-04-26 16:10:04.889492] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.431 [2024-04-26 16:10:04.890040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.431 [2024-04-26 16:10:04.890066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.431 [2024-04-26 16:10:04.901882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.431 [2024-04-26 16:10:04.902498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.431 [2024-04-26 16:10:04.902526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.431 [2024-04-26 16:10:04.915519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.431 [2024-04-26 16:10:04.916266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.431 [2024-04-26 16:10:04.916293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.431 [2024-04-26 16:10:04.929853] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.431 [2024-04-26 16:10:04.930473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.431 [2024-04-26 16:10:04.930500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.431 [2024-04-26 16:10:04.944563] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.431 [2024-04-26 16:10:04.945245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.431 [2024-04-26 16:10:04.945272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.431 [2024-04-26 16:10:04.958329] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.431 [2024-04-26 16:10:04.958930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.431 [2024-04-26 16:10:04.958956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.431 [2024-04-26 16:10:04.970747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.431 [2024-04-26 16:10:04.971477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.431 [2024-04-26 16:10:04.971503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.431 [2024-04-26 16:10:04.983583] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.431 [2024-04-26 16:10:04.984104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.431 [2024-04-26 16:10:04.984135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.431 [2024-04-26 16:10:04.996676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.431 [2024-04-26 16:10:04.997402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.431 [2024-04-26 16:10:04.997430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.431 [2024-04-26 16:10:05.009966] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.431 [2024-04-26 16:10:05.010583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.431 [2024-04-26 16:10:05.010610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.431 [2024-04-26 16:10:05.024501] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.431 [2024-04-26 16:10:05.025101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.431 [2024-04-26 16:10:05.025129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.431 [2024-04-26 16:10:05.039386] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.431 [2024-04-26 16:10:05.039869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.431 [2024-04-26 16:10:05.039895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.431 [2024-04-26 16:10:05.053467] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.431 [2024-04-26 16:10:05.054153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.431 [2024-04-26 16:10:05.054180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.431 [2024-04-26 16:10:05.066980] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.431 [2024-04-26 16:10:05.067553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.431 [2024-04-26 16:10:05.067579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.431 [2024-04-26 16:10:05.080145] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.431 [2024-04-26 16:10:05.080673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.431 [2024-04-26 16:10:05.080699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.431 [2024-04-26 16:10:05.092785] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.431 [2024-04-26 16:10:05.093465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.431 [2024-04-26 16:10:05.093491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.431 [2024-04-26 16:10:05.106523] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.431 [2024-04-26 16:10:05.107092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.431 [2024-04-26 16:10:05.107135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.691 [2024-04-26 16:10:05.118465] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.691 [2024-04-26 16:10:05.118970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.691 [2024-04-26 16:10:05.118995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.691 [2024-04-26 16:10:05.132155] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.691 [2024-04-26 16:10:05.132680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.691 [2024-04-26 16:10:05.132706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.691 [2024-04-26 16:10:05.145035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.691 [2024-04-26 16:10:05.145522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.691 [2024-04-26 16:10:05.145548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.691 [2024-04-26 16:10:05.159037] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.691 [2024-04-26 16:10:05.159714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.691 [2024-04-26 16:10:05.159739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.691 [2024-04-26 16:10:05.172523] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.691 [2024-04-26 16:10:05.173032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.691 [2024-04-26 16:10:05.173058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.691 [2024-04-26 16:10:05.185139] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.691 [2024-04-26 16:10:05.185665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.692 [2024-04-26 16:10:05.185691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.692 [2024-04-26 16:10:05.199295] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.692 [2024-04-26 16:10:05.199942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.692 [2024-04-26 16:10:05.199969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.692 [2024-04-26 16:10:05.211705] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.692 [2024-04-26 16:10:05.212337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.692 [2024-04-26 16:10:05.212368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.692 [2024-04-26 16:10:05.225656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.692 [2024-04-26 16:10:05.226170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:25.692 [2024-04-26 16:10:05.226197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.692 [2024-04-26 16:10:05.238965] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.692 [2024-04-26 16:10:05.239544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.692 [2024-04-26 16:10:05.239570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.692 [2024-04-26 16:10:05.251898] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.692 [2024-04-26 16:10:05.252401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.692 [2024-04-26 16:10:05.252427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.692 [2024-04-26 16:10:05.265866] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.692 [2024-04-26 16:10:05.266360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.692 [2024-04-26 16:10:05.266386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.692 [2024-04-26 16:10:05.279454] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.692 [2024-04-26 16:10:05.279886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.692 [2024-04-26 16:10:05.279913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.692 [2024-04-26 16:10:05.293958] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.692 [2024-04-26 16:10:05.294475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.692 [2024-04-26 16:10:05.294501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.692 [2024-04-26 16:10:05.308637] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.692 [2024-04-26 16:10:05.309204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.692 [2024-04-26 16:10:05.309230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.692 [2024-04-26 16:10:05.322034] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.692 [2024-04-26 16:10:05.322517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.692 [2024-04-26 16:10:05.322542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.692 [2024-04-26 16:10:05.335713] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.692 [2024-04-26 16:10:05.336403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.692 [2024-04-26 16:10:05.336430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.692 [2024-04-26 16:10:05.349691] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.692 [2024-04-26 16:10:05.350310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.692 [2024-04-26 16:10:05.350347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.692 [2024-04-26 16:10:05.363675] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.692 [2024-04-26 16:10:05.364246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.692 [2024-04-26 16:10:05.364272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.952 [2024-04-26 16:10:05.377435] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.952 [2024-04-26 16:10:05.377877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.952 [2024-04-26 16:10:05.377904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.952 [2024-04-26 16:10:05.391343] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.952 [2024-04-26 16:10:05.391789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.952 [2024-04-26 16:10:05.391815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.952 [2024-04-26 16:10:05.404751] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.952 [2024-04-26 16:10:05.405188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.952 [2024-04-26 16:10:05.405215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.952 [2024-04-26 16:10:05.416741] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:27:25.952 [2024-04-26 16:10:05.417201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.952 [2024-04-26 16:10:05.417227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.952 [2024-04-26 16:10:05.430537] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.952 [2024-04-26 16:10:05.431067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.952 [2024-04-26 16:10:05.431099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.952 [2024-04-26 16:10:05.443461] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.952 [2024-04-26 16:10:05.443919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.952 [2024-04-26 16:10:05.443958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.952 [2024-04-26 16:10:05.455702] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.953 [2024-04-26 16:10:05.456263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.953 [2024-04-26 16:10:05.456289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.953 [2024-04-26 16:10:05.468231] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.953 [2024-04-26 16:10:05.468698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.953 [2024-04-26 16:10:05.468723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.953 [2024-04-26 16:10:05.481917] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.953 [2024-04-26 16:10:05.482326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.953 [2024-04-26 16:10:05.482352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.953 [2024-04-26 16:10:05.495642] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.953 [2024-04-26 16:10:05.496067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.953 [2024-04-26 16:10:05.496116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.953 [2024-04-26 16:10:05.508914] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.953 [2024-04-26 16:10:05.509426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.953 [2024-04-26 16:10:05.509452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.953 [2024-04-26 16:10:05.522671] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.953 [2024-04-26 16:10:05.523162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.953 [2024-04-26 16:10:05.523190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.953 [2024-04-26 16:10:05.536549] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.953 [2024-04-26 16:10:05.537056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.953 [2024-04-26 16:10:05.537089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.953 [2024-04-26 16:10:05.550267] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.953 [2024-04-26 16:10:05.550756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.953 [2024-04-26 16:10:05.550784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.953 [2024-04-26 16:10:05.562618] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.953 [2024-04-26 16:10:05.563130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.953 [2024-04-26 16:10:05.563157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.953 [2024-04-26 16:10:05.575559] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.953 [2024-04-26 16:10:05.575941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.953 [2024-04-26 16:10:05.575967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:25.953 [2024-04-26 16:10:05.587935] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.953 [2024-04-26 16:10:05.588424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.953 [2024-04-26 16:10:05.588450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.953 [2024-04-26 16:10:05.602673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.953 [2024-04-26 16:10:05.603035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.953 [2024-04-26 16:10:05.603061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:25.953 [2024-04-26 16:10:05.615859] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.953 [2024-04-26 16:10:05.616313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.953 [2024-04-26 16:10:05.616339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:25.953 [2024-04-26 16:10:05.629515] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:25.953 [2024-04-26 16:10:05.629917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.953 [2024-04-26 16:10:05.629944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.213 [2024-04-26 16:10:05.641643] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:26.213 [2024-04-26 16:10:05.642225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.213 [2024-04-26 16:10:05.642252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.213 [2024-04-26 16:10:05.655326] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:26.213 [2024-04-26 16:10:05.655921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.213 [2024-04-26 16:10:05.655948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.213 [2024-04-26 16:10:05.668865] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:26.213 [2024-04-26 16:10:05.669331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.213 [2024-04-26 16:10:05.669356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.213 [2024-04-26 16:10:05.681407] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:27:26.213 [2024-04-26 16:10:05.681958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.213 [2024-04-26 16:10:05.681984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:26.213 [2024-04-26 16:10:05.696573] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90
00:27:26.213 [2024-04-26 16:10:05.697028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:26.213 [2024-04-26 16:10:05.697054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:26.213
00:27:26.213 Latency(us)
00:27:26.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:26.213 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:26.213 nvme0n1 : 2.01 2228.66 278.58 0.00 0.00 7162.17 5157.40 25758.50
00:27:26.213 ===================================================================================================================
00:27:26.213 Total : 2228.66 278.58 0.00 0.00 7162.17 5157.40 25758.50
00:27:26.213 0
00:27:26.213 16:10:05 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:26.213 16:10:05 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:26.213 16:10:05 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:26.213 | .driver_specific
00:27:26.213 | .nvme_error
00:27:26.213 | .status_code
00:27:26.213 | .command_transient_transport_error'
00:27:26.213 16:10:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:26.472 16:10:05 -- host/digest.sh@71 -- # (( 144 > 0 ))
00:27:26.473 16:10:05 -- host/digest.sh@73 -- # killprocess 2593788
00:27:26.473 16:10:05 -- common/autotest_common.sh@936 -- # '[' -z 2593788 ']'
00:27:26.473 16:10:05 -- common/autotest_common.sh@940 -- # kill -0 2593788
00:27:26.473 16:10:05 -- common/autotest_common.sh@941 -- # uname
00:27:26.473 16:10:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:27:26.473 16:10:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2593788
00:27:26.473 16:10:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:27:26.473 16:10:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:27:26.473 16:10:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2593788'
00:27:26.473 killing process with pid 2593788
00:27:26.473 16:10:05 -- common/autotest_common.sh@955 -- # kill 2593788
00:27:26.473 Received shutdown signal, test time was about 2.000000 seconds
00:27:26.473
00:27:26.473 Latency(us)
00:27:26.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:26.473 ===================================================================================================================
00:27:26.473 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:26.473 16:10:05 -- common/autotest_common.sh@960 -- # wait 2593788
00:27:27.409 16:10:06 -- host/digest.sh@116 -- # killprocess 2591215
00:27:27.409 16:10:06 -- common/autotest_common.sh@936 -- # '[' -z 2591215 ']'
00:27:27.409 16:10:06 -- common/autotest_common.sh@940 -- # kill -0 2591215
00:27:27.409 16:10:06 -- common/autotest_common.sh@941 -- # uname
00:27:27.409 16:10:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:27:27.409 16:10:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2591215
00:27:27.409 16:10:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:27:27.409 16:10:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:27:27.409 16:10:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2591215'
00:27:27.409 killing process with pid 2591215
00:27:27.409 16:10:07 -- common/autotest_common.sh@955 -- # kill 2591215
00:27:27.409 16:10:07 -- common/autotest_common.sh@960 -- # wait 2591215
00:27:28.789
00:27:28.789 real 0m21.300s
00:27:28.789 user 0m40.262s
00:27:28.789 sys 0m4.012s
00:27:28.789 16:10:08 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:27:28.789 16:10:08 -- common/autotest_common.sh@10 -- # set +x
00:27:28.789 ************************************
00:27:28.789 END TEST nvmf_digest_error
00:27:28.789 ************************************
00:27:28.789 16:10:08 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:27:28.789 16:10:08 -- host/digest.sh@150 -- # nvmftestfini
00:27:28.789 16:10:08 -- nvmf/common.sh@477 -- # nvmfcleanup
00:27:28.789 16:10:08 -- nvmf/common.sh@117 -- # sync
00:27:28.789 16:10:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:28.789 16:10:08 -- nvmf/common.sh@120 -- # set +e
00:27:28.789 16:10:08 -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:28.789 16:10:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:28.789 rmmod nvme_tcp
00:27:28.789 rmmod nvme_fabrics
00:27:28.789 rmmod nvme_keyring
00:27:28.789 16:10:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:28.789 16:10:08 -- nvmf/common.sh@124 -- # set -e
00:27:28.789 16:10:08 -- nvmf/common.sh@125 -- # return 0
00:27:28.789 16:10:08 -- nvmf/common.sh@478 -- # '[' -n 2591215 ']'
00:27:28.789 16:10:08 -- nvmf/common.sh@479 -- # killprocess 2591215
00:27:28.789 16:10:08 -- common/autotest_common.sh@936 -- # '[' -z 2591215 ']'
00:27:28.789 16:10:08 -- common/autotest_common.sh@940 -- # kill -0 2591215
00:27:28.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2591215) - No such process
00:27:28.789 16:10:08 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2591215 is not found'
00:27:28.789 Process with pid 2591215 is not found
00:27:28.789 16:10:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:27:28.789 16:10:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:27:28.789 16:10:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:27:28.789 16:10:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:28.789 16:10:08 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:27:28.789 16:10:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:28.789 16:10:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:28.789 16:10:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.328
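For reference, the pass/fail check traced above ((( 144 > 0 )) in host/digest.sh) is driven by get_transient_errcount, which reads the transient transport error counter out of the bdev iostat JSON over the bperf RPC socket. A hand-run equivalent of that query, shown here only as an illustration and assembled from the exact RPC call and jq filter that appear in the trace, would be:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The counter came back as 144 in this run, meaning the data digest errors generated during the run were reported as COMMAND TRANSIENT TRANSPORT ERROR completions on the WRITE commands logged above, so nvmf_digest_error passes and the harness proceeds with the teardown (the killprocess/rmmod sequence just shown).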
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:31.328 16:10:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:31.328 16:10:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:31.328 16:10:10 -- common/autotest_common.sh@10 -- # set +x 00:27:31.328 ************************************ 00:27:31.328 START TEST nvmf_bdevperf 00:27:31.328 ************************************ 00:27:31.328 16:10:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:31.328 * Looking for test storage... 00:27:31.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:31.328 16:10:10 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:31.328 16:10:10 -- nvmf/common.sh@7 -- # uname -s 00:27:31.328 16:10:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.328 16:10:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.328 16:10:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.328 16:10:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.328 16:10:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.328 16:10:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.328 16:10:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.328 16:10:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.328 16:10:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.328 16:10:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.328 16:10:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:31.328 16:10:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:31.328 16:10:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.328 16:10:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.328 16:10:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:31.328 16:10:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.328 16:10:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:31.328 16:10:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.328 16:10:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.328 16:10:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.328 16:10:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.328 16:10:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.329 16:10:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.329 16:10:10 -- paths/export.sh@5 -- # export PATH 00:27:31.329 16:10:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.329 16:10:10 -- nvmf/common.sh@47 -- # : 0 00:27:31.329 16:10:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:31.329 16:10:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:31.329 16:10:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.329 16:10:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.329 16:10:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.329 16:10:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:31.329 16:10:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:31.329 16:10:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:31.329 16:10:10 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:31.329 16:10:10 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:31.329 16:10:10 -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:31.329 16:10:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:31.329 16:10:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.329 16:10:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:31.329 16:10:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:31.329 16:10:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:31.329 16:10:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.329 16:10:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:31.329 16:10:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.329 16:10:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:31.329 16:10:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:31.329 16:10:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:31.329 16:10:10 -- common/autotest_common.sh@10 -- # set +x 00:27:36.600 16:10:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:27:36.600 16:10:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:36.600 16:10:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:36.600 16:10:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:36.600 16:10:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:36.600 16:10:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:36.600 16:10:15 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:36.600 16:10:15 -- nvmf/common.sh@295 -- # net_devs=() 00:27:36.600 16:10:15 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:36.600 16:10:15 -- nvmf/common.sh@296 -- # e810=() 00:27:36.600 16:10:15 -- nvmf/common.sh@296 -- # local -ga e810 00:27:36.600 16:10:15 -- nvmf/common.sh@297 -- # x722=() 00:27:36.600 16:10:15 -- nvmf/common.sh@297 -- # local -ga x722 00:27:36.600 16:10:15 -- nvmf/common.sh@298 -- # mlx=() 00:27:36.600 16:10:15 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:36.600 16:10:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:36.600 16:10:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:36.600 16:10:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:36.600 16:10:15 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:36.600 16:10:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:36.600 16:10:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:36.600 16:10:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:36.600 16:10:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:36.600 16:10:15 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:36.600 16:10:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:36.600 16:10:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:36.600 16:10:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:36.600 16:10:15 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:36.600 16:10:15 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:36.600 16:10:15 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:36.600 16:10:15 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:36.600 16:10:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:36.600 16:10:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:36.600 16:10:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:36.600 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:36.600 16:10:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:36.600 16:10:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:36.600 16:10:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.600 16:10:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.600 16:10:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:36.600 16:10:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:36.600 16:10:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:36.600 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:36.600 16:10:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:36.600 16:10:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:36.600 16:10:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:36.600 16:10:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:36.600 16:10:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:36.600 16:10:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
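The discovery loop above has matched two Intel E810 functions (device ID 0x159b at 0000:86:00.0 and 0000:86:00.1); the records that follow resolve each PCI address to its kernel network interface through sysfs before building the net_devs list. A minimal sketch of that lookup, assuming the standard /sys/bus/pci layout the trace itself globs:

# map a PCI function to the net device its driver registered
# (addresses taken from the 'Found 0000:86:00.x' records above)
for pci in 0000:86:00.0 0000:86:00.1; do
  netdev=$(ls "/sys/bus/pci/devices/$pci/net" 2>/dev/null)
  echo "Found net devices under $pci: ${netdev:-<driver not bound>}"
done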
00:27:36.600 16:10:15 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:36.600 16:10:15 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:36.600 16:10:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:36.600 16:10:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.600 16:10:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:36.600 16:10:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.600 16:10:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:36.600 Found net devices under 0000:86:00.0: cvl_0_0 00:27:36.600 16:10:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.600 16:10:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:36.600 16:10:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:36.600 16:10:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:36.600 16:10:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:36.600 16:10:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:36.600 Found net devices under 0000:86:00.1: cvl_0_1 00:27:36.600 16:10:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:36.600 16:10:15 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:36.600 16:10:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:36.600 16:10:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:36.600 16:10:15 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:36.600 16:10:15 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:36.600 16:10:15 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:36.600 16:10:15 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:36.600 16:10:15 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:36.600 16:10:15 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:36.600 16:10:15 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:36.600 16:10:15 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:36.600 16:10:15 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:36.600 16:10:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:36.600 16:10:15 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:36.600 16:10:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:36.600 16:10:15 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:36.600 16:10:15 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:36.600 16:10:15 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:36.600 16:10:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:36.600 16:10:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:36.600 16:10:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:36.600 16:10:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:36.600 16:10:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:36.600 16:10:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:36.600 16:10:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:36.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:36.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:27:36.600 00:27:36.600 --- 10.0.0.2 ping statistics --- 00:27:36.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.600 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:27:36.600 16:10:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:36.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:36.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:27:36.600 00:27:36.600 --- 10.0.0.1 ping statistics --- 00:27:36.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.600 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:27:36.600 16:10:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:36.600 16:10:16 -- nvmf/common.sh@411 -- # return 0 00:27:36.600 16:10:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:36.600 16:10:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:36.600 16:10:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:36.600 16:10:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:36.600 16:10:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:36.600 16:10:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:36.600 16:10:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:36.600 16:10:16 -- host/bdevperf.sh@25 -- # tgt_init 00:27:36.600 16:10:16 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:36.600 16:10:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:36.600 16:10:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:36.601 16:10:16 -- common/autotest_common.sh@10 -- # set +x 00:27:36.601 16:10:16 -- nvmf/common.sh@470 -- # nvmfpid=2598255 00:27:36.601 16:10:16 -- nvmf/common.sh@471 -- # waitforlisten 2598255 00:27:36.601 16:10:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:36.601 16:10:16 -- common/autotest_common.sh@817 -- # '[' -z 2598255 ']' 00:27:36.601 16:10:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.601 16:10:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:36.601 16:10:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:36.601 16:10:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:36.601 16:10:16 -- common/autotest_common.sh@10 -- # set +x 00:27:36.860 [2024-04-26 16:10:16.324011] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:27:36.860 [2024-04-26 16:10:16.324095] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:36.860 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.860 [2024-04-26 16:10:16.432342] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:37.119 [2024-04-26 16:10:16.648188] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:37.119 [2024-04-26 16:10:16.648234] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
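Condensed from the ip and iptables records traced above: the job builds a single-host NVMe/TCP topology over the two E810 ports, moving the target-side port cvl_0_0 (10.0.0.2) into the cvl_0_0_ns_spdk namespace while the initiator-side port cvl_0_1 (10.0.0.1) stays in the root namespace, then opens TCP 4420 and verifies reachability in both directions. A sketch of the same setup, reusing the interface names and addresses from this run:

# single-host NVMe/TCP loopback: target in a netns, initiator in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator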
00:27:37.119 [2024-04-26 16:10:16.648244] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:37.119 [2024-04-26 16:10:16.648272] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:37.119 [2024-04-26 16:10:16.648283] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:37.119 [2024-04-26 16:10:16.648546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:37.120 [2024-04-26 16:10:16.648608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.120 [2024-04-26 16:10:16.648629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:37.687 16:10:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:37.687 16:10:17 -- common/autotest_common.sh@850 -- # return 0 00:27:37.687 16:10:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:37.687 16:10:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:37.687 16:10:17 -- common/autotest_common.sh@10 -- # set +x 00:27:37.687 16:10:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:37.687 16:10:17 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:37.687 16:10:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.687 16:10:17 -- common/autotest_common.sh@10 -- # set +x 00:27:37.687 [2024-04-26 16:10:17.141469] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:37.687 16:10:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.687 16:10:17 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:37.687 16:10:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.687 16:10:17 -- common/autotest_common.sh@10 -- # set +x 00:27:37.687 Malloc0 00:27:37.687 16:10:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.687 16:10:17 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:37.687 16:10:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.687 16:10:17 -- common/autotest_common.sh@10 -- # set +x 00:27:37.687 16:10:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.687 16:10:17 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:37.687 16:10:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.687 16:10:17 -- common/autotest_common.sh@10 -- # set +x 00:27:37.687 16:10:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.687 16:10:17 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:37.687 16:10:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.687 16:10:17 -- common/autotest_common.sh@10 -- # set +x 00:27:37.687 [2024-04-26 16:10:17.281414] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.687 16:10:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.687 16:10:17 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:37.687 16:10:17 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:37.687 16:10:17 -- nvmf/common.sh@521 -- # config=() 00:27:37.687 16:10:17 -- nvmf/common.sh@521 -- # local subsystem config 00:27:37.687 16:10:17 -- nvmf/common.sh@523 -- # for 
subsystem in "${@:-1}" 00:27:37.687 16:10:17 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:37.687 { 00:27:37.687 "params": { 00:27:37.687 "name": "Nvme$subsystem", 00:27:37.687 "trtype": "$TEST_TRANSPORT", 00:27:37.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.687 "adrfam": "ipv4", 00:27:37.687 "trsvcid": "$NVMF_PORT", 00:27:37.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.687 "hdgst": ${hdgst:-false}, 00:27:37.687 "ddgst": ${ddgst:-false} 00:27:37.687 }, 00:27:37.687 "method": "bdev_nvme_attach_controller" 00:27:37.687 } 00:27:37.687 EOF 00:27:37.687 )") 00:27:37.687 16:10:17 -- nvmf/common.sh@543 -- # cat 00:27:37.687 16:10:17 -- nvmf/common.sh@545 -- # jq . 00:27:37.687 16:10:17 -- nvmf/common.sh@546 -- # IFS=, 00:27:37.687 16:10:17 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:37.687 "params": { 00:27:37.687 "name": "Nvme1", 00:27:37.687 "trtype": "tcp", 00:27:37.687 "traddr": "10.0.0.2", 00:27:37.687 "adrfam": "ipv4", 00:27:37.687 "trsvcid": "4420", 00:27:37.687 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:37.687 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:37.687 "hdgst": false, 00:27:37.687 "ddgst": false 00:27:37.687 }, 00:27:37.687 "method": "bdev_nvme_attach_controller" 00:27:37.687 }' 00:27:37.687 [2024-04-26 16:10:17.359336] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:27:37.687 [2024-04-26 16:10:17.359426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2598478 ] 00:27:37.946 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.946 [2024-04-26 16:10:17.465552] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.205 [2024-04-26 16:10:17.700483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.773 Running I/O for 1 seconds... 
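The resolved bdev_nvme_attach_controller parameters printed just above reach bdevperf as --json /dev/fd/62, which suggests process substitution from gen_nvmf_target_json; the outer JSON wrapper is produced inside that helper and is not visible in this trace. A hedged, standalone equivalent of the 1-second verify run could look like the sketch below (the bdevperf path and the "subsystems"/"bdev" wrapper are assumptions inferred from SPDK's JSON config format, not copied from this log); the run's own latency table follows right after this point.

# sketch: re-run the 1 s verify workload against the listener at 10.0.0.2:4420
cat > /tmp/bperf_nvme.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
./build/examples/bdevperf --json /tmp/bperf_nvme.json -q 128 -o 4096 -w verify -t 1   # path: adjust to your SPDK build tree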
00:27:39.709 00:27:39.709 Latency(us) 00:27:39.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:39.709 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:39.709 Verification LBA range: start 0x0 length 0x4000 00:27:39.709 Nvme1n1 : 1.01 9605.19 37.52 0.00 0.00 13256.09 2678.43 12252.38 00:27:39.709 =================================================================================================================== 00:27:39.709 Total : 9605.19 37.52 0.00 0.00 13256.09 2678.43 12252.38 00:27:41.087 16:10:20 -- host/bdevperf.sh@30 -- # bdevperfpid=2598967 00:27:41.087 16:10:20 -- host/bdevperf.sh@32 -- # sleep 3 00:27:41.087 16:10:20 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:41.087 16:10:20 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:41.087 16:10:20 -- nvmf/common.sh@521 -- # config=() 00:27:41.087 16:10:20 -- nvmf/common.sh@521 -- # local subsystem config 00:27:41.087 16:10:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:41.087 16:10:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:41.087 { 00:27:41.087 "params": { 00:27:41.087 "name": "Nvme$subsystem", 00:27:41.087 "trtype": "$TEST_TRANSPORT", 00:27:41.087 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.087 "adrfam": "ipv4", 00:27:41.087 "trsvcid": "$NVMF_PORT", 00:27:41.087 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.087 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.087 "hdgst": ${hdgst:-false}, 00:27:41.087 "ddgst": ${ddgst:-false} 00:27:41.087 }, 00:27:41.087 "method": "bdev_nvme_attach_controller" 00:27:41.087 } 00:27:41.087 EOF 00:27:41.087 )") 00:27:41.087 16:10:20 -- nvmf/common.sh@543 -- # cat 00:27:41.087 16:10:20 -- nvmf/common.sh@545 -- # jq . 00:27:41.087 16:10:20 -- nvmf/common.sh@546 -- # IFS=, 00:27:41.087 16:10:20 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:41.087 "params": { 00:27:41.087 "name": "Nvme1", 00:27:41.087 "trtype": "tcp", 00:27:41.087 "traddr": "10.0.0.2", 00:27:41.087 "adrfam": "ipv4", 00:27:41.087 "trsvcid": "4420", 00:27:41.087 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:41.087 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:41.087 "hdgst": false, 00:27:41.087 "ddgst": false 00:27:41.087 }, 00:27:41.087 "method": "bdev_nvme_attach_controller" 00:27:41.087 }' 00:27:41.087 [2024-04-26 16:10:20.416276] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:27:41.087 [2024-04-26 16:10:20.416366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2598967 ] 00:27:41.087 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.087 [2024-04-26 16:10:20.519109] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.087 [2024-04-26 16:10:20.753455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.655 Running I/O for 15 seconds... 
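The 15-second run starting here is the fault-injection half of the test: right after this record, host/bdevperf.sh sends SIGKILL to the nvmf target started earlier (pid 2598255, the nvmfpid above) while bdevperf keeps submitting I/O, so the long run of ABORTED - SQ DELETION completions that follows reflects in-flight commands being failed back during the host-side teardown. Condensed from the records below:

# what host/bdevperf.sh@33-35 does next, per the trace that follows
kill -9 2598255   # SIGKILL the nvmf_tgt while the 15 s verify job is mid-run
sleep 3           # give the host side time to notice the dropped connection
# outstanding reads/writes then complete as "ABORTED - SQ DELETION (00/08)"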
00:27:44.197 16:10:23 -- host/bdevperf.sh@33 -- # kill -9 2598255 00:27:44.197 16:10:23 -- host/bdevperf.sh@35 -- # sleep 3 00:27:44.197 [2024-04-26 16:10:23.375239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375542] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.197 [2024-04-26 16:10:23.375948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.197 [2024-04-26 16:10:23.375957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:44.198 [2024-04-26 16:10:23.375968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.375977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.375988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.375997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376179] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:113 nsid:1 lba:25712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.198 [2024-04-26 16:10:23.376678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.198 [2024-04-26 16:10:23.376687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.376698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.376708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.376719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.376727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.376738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.376747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.376764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.376773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.376784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25792 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.376793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.376804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.376813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.376824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.376833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.376844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.376853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.376864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.376875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.376886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.376895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.376906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.376915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.376925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.376935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.376945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.376955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.376966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.376975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.376986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:44.199 [2024-04-26 16:10:23.376994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.377005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.377015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.377025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.377034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.377045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.377055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.377066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.377079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.377090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.377099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.377110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.377120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.377133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.377142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.377153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.377162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.377173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.377182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.377193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.377202] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.377213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.377223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.377234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.377243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.377253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.377262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.377273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.377291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.377302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.199 [2024-04-26 16:10:23.377311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.377322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:26128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.199 [2024-04-26 16:10:23.377332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.377343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.199 [2024-04-26 16:10:23.377352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.377363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.199 [2024-04-26 16:10:23.377372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.377383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.199 [2024-04-26 16:10:23.377393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.377404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.199 [2024-04-26 16:10:23.377414] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.377424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.199 [2024-04-26 16:10:23.377434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.199 [2024-04-26 16:10:23.377445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.200 [2024-04-26 16:10:23.377454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.200 [2024-04-26 16:10:23.377474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.200 [2024-04-26 16:10:23.377494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.200 [2024-04-26 16:10:23.377514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.200 [2024-04-26 16:10:23.377533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.200 [2024-04-26 16:10:23.377554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.200 [2024-04-26 16:10:23.377574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.200 [2024-04-26 16:10:23.377594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.200 [2024-04-26 16:10:23.377615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.200 [2024-04-26 16:10:23.377635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:26000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.200 [2024-04-26 16:10:23.377656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:26008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.200 [2024-04-26 16:10:23.377676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:26016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.200 [2024-04-26 16:10:23.377695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:26024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.200 [2024-04-26 16:10:23.377716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.200 [2024-04-26 16:10:23.377735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:26040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.200 [2024-04-26 16:10:23.377755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:26048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.200 [2024-04-26 16:10:23.377774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.200 [2024-04-26 16:10:23.377794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.200 [2024-04-26 16:10:23.377814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 
[2024-04-26 16:10:23.377824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:26072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.200 [2024-04-26 16:10:23.377833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:26080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.200 [2024-04-26 16:10:23.377852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.200 [2024-04-26 16:10:23.377879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.200 [2024-04-26 16:10:23.377898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.200 [2024-04-26 16:10:23.377926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.200 [2024-04-26 16:10:23.377946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.377957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007e40 is same with the state(5) to be set 00:27:44.200 [2024-04-26 16:10:23.377970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:44.200 [2024-04-26 16:10:23.377979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:44.200 [2024-04-26 16:10:23.377988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26120 len:8 PRP1 0x0 PRP2 0x0 00:27:44.200 [2024-04-26 16:10:23.377998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.200 [2024-04-26 16:10:23.378283] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007e40 was disconnected and freed. reset controller. 
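Note on the dump above: the queued READ/WRITE commands are being completed with "ABORTED - SQ DELETION" because the I/O submission queue is torn down when the qpair is freed for the controller reset; the pair printed in parentheses is (SCT/SC), so (00/08) is Status Code Type 0x0 (generic command status) with Status Code 0x08, "Command Aborted due to SQ Deletion". A minimal sketch of decoding that 16-bit completion status word, assuming the NVMe base-spec field layout (this is illustrative, not SPDK's print helper):

#include <stdint.h>
#include <stdio.h>

/* Decode the 16-bit NVMe completion status into the fields the log
 * prints as "(sct/sc) ... p:.. m:.. dnr:..".
 * Assumed layout: bit0=P, bits8:1=SC, bits11:9=SCT, bit14=More, bit15=DNR. */
static void decode_status(uint16_t status)
{
    unsigned p   = status & 0x1;          /* phase tag             */
    unsigned sc  = (status >> 1) & 0xff;  /* status code           */
    unsigned sct = (status >> 9) & 0x7;   /* status code type      */
    unsigned m   = (status >> 14) & 0x1;  /* more status available */
    unsigned dnr = (status >> 15) & 0x1;  /* do not retry          */
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    /* 0x0010 encodes SCT=0x0, SC=0x08 with P/M/DNR clear, matching the
     * aborted completions in the dump above. */
    decode_status(0x0010);
    return 0;
}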
00:27:44.200 [2024-04-26 16:10:23.381456] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.200 [2024-04-26 16:10:23.381538] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.200 [2024-04-26 16:10:23.382333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.200 [2024-04-26 16:10:23.382750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.200 [2024-04-26 16:10:23.382766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.200 [2024-04-26 16:10:23.382777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.200 [2024-04-26 16:10:23.382999] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.200 [2024-04-26 16:10:23.383207] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.200 [2024-04-26 16:10:23.383226] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.200 [2024-04-26 16:10:23.383238] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.200 [2024-04-26 16:10:23.386352] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.200 [2024-04-26 16:10:23.395134] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.200 [2024-04-26 16:10:23.395834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.200 [2024-04-26 16:10:23.396362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.200 [2024-04-26 16:10:23.396408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.200 [2024-04-26 16:10:23.396440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.200 [2024-04-26 16:10:23.396984] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.200 [2024-04-26 16:10:23.397198] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.200 [2024-04-26 16:10:23.397210] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.200 [2024-04-26 16:10:23.397223] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.200 [2024-04-26 16:10:23.400310] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
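Each reset cycle above follows the same pattern: the controller is disconnected, the TCP reconnect to 10.0.0.2:4420 fails with errno 111, controller reinitialization fails, and the reset is retried. On Linux errno 111 is ECONNREFUSED, i.e. nothing is accepting connections on that address/port while the target side is down. A small illustrative program (not SPDK's posix_sock_create) showing what a single such connect attempt reports:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    /* With no listener on the target, this prints
     * "connect() failed, errno = 111 (Connection refused)". */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    else
        printf("connected\n");

    close(fd);
    return 0;
}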
00:27:44.200 [2024-04-26 16:10:23.408306] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.200 [2024-04-26 16:10:23.408959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.201 [2024-04-26 16:10:23.409367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.201 [2024-04-26 16:10:23.409381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.201 [2024-04-26 16:10:23.409391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.201 [2024-04-26 16:10:23.409585] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.201 [2024-04-26 16:10:23.409777] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.201 [2024-04-26 16:10:23.409788] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.201 [2024-04-26 16:10:23.409797] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.201 [2024-04-26 16:10:23.412730] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.201 [2024-04-26 16:10:23.421421] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.201 [2024-04-26 16:10:23.422086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.201 [2024-04-26 16:10:23.422590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.201 [2024-04-26 16:10:23.422632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.201 [2024-04-26 16:10:23.422662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.201 [2024-04-26 16:10:23.423147] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.201 [2024-04-26 16:10:23.423339] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.201 [2024-04-26 16:10:23.423350] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.201 [2024-04-26 16:10:23.423358] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.201 [2024-04-26 16:10:23.426261] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.201 [2024-04-26 16:10:23.434528] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.201 [2024-04-26 16:10:23.435169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.201 [2024-04-26 16:10:23.435647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.201 [2024-04-26 16:10:23.435688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.201 [2024-04-26 16:10:23.435718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.201 [2024-04-26 16:10:23.435979] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.201 [2024-04-26 16:10:23.436197] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.201 [2024-04-26 16:10:23.436209] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.201 [2024-04-26 16:10:23.436221] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.201 [2024-04-26 16:10:23.439145] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.201 [2024-04-26 16:10:23.447598] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.201 [2024-04-26 16:10:23.448208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.201 [2024-04-26 16:10:23.448582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.201 [2024-04-26 16:10:23.448621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.201 [2024-04-26 16:10:23.448651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.201 [2024-04-26 16:10:23.449185] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.201 [2024-04-26 16:10:23.449378] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.201 [2024-04-26 16:10:23.449389] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.201 [2024-04-26 16:10:23.449397] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.201 [2024-04-26 16:10:23.452319] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.201 [2024-04-26 16:10:23.460678] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.201 [2024-04-26 16:10:23.461336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.201 [2024-04-26 16:10:23.461820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.201 [2024-04-26 16:10:23.461859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.201 [2024-04-26 16:10:23.461869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.201 [2024-04-26 16:10:23.462061] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.201 [2024-04-26 16:10:23.462258] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.201 [2024-04-26 16:10:23.462269] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.201 [2024-04-26 16:10:23.462278] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.201 [2024-04-26 16:10:23.465195] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.201 [2024-04-26 16:10:23.473798] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.201 [2024-04-26 16:10:23.474451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.201 [2024-04-26 16:10:23.474920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.201 [2024-04-26 16:10:23.474960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.201 [2024-04-26 16:10:23.474990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.201 [2024-04-26 16:10:23.475644] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.201 [2024-04-26 16:10:23.476167] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.201 [2024-04-26 16:10:23.476178] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.201 [2024-04-26 16:10:23.476187] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.201 [2024-04-26 16:10:23.479124] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.201 [2024-04-26 16:10:23.486914] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.201 [2024-04-26 16:10:23.487548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.201 [2024-04-26 16:10:23.487913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.201 [2024-04-26 16:10:23.487955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.201 [2024-04-26 16:10:23.487984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.201 [2024-04-26 16:10:23.488640] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.201 [2024-04-26 16:10:23.488944] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.201 [2024-04-26 16:10:23.488959] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.201 [2024-04-26 16:10:23.488971] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.201 [2024-04-26 16:10:23.493402] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.201 [2024-04-26 16:10:23.500576] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.201 [2024-04-26 16:10:23.501179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.201 [2024-04-26 16:10:23.501606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.201 [2024-04-26 16:10:23.501646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.201 [2024-04-26 16:10:23.501676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.201 [2024-04-26 16:10:23.502203] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.201 [2024-04-26 16:10:23.502395] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.201 [2024-04-26 16:10:23.502405] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.201 [2024-04-26 16:10:23.502414] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.201 [2024-04-26 16:10:23.505378] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.201 [2024-04-26 16:10:23.513737] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.202 [2024-04-26 16:10:23.514368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.202 [2024-04-26 16:10:23.514861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.202 [2024-04-26 16:10:23.514903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.202 [2024-04-26 16:10:23.514933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.202 [2024-04-26 16:10:23.515384] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.202 [2024-04-26 16:10:23.515575] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.202 [2024-04-26 16:10:23.515586] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.202 [2024-04-26 16:10:23.515594] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.202 [2024-04-26 16:10:23.518520] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.202 [2024-04-26 16:10:23.526850] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.202 [2024-04-26 16:10:23.527523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.202 [2024-04-26 16:10:23.528021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.202 [2024-04-26 16:10:23.528061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.202 [2024-04-26 16:10:23.528104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.202 [2024-04-26 16:10:23.528481] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.202 [2024-04-26 16:10:23.528671] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.202 [2024-04-26 16:10:23.528682] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.202 [2024-04-26 16:10:23.528690] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.202 [2024-04-26 16:10:23.531611] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.202 [2024-04-26 16:10:23.539940] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.202 [2024-04-26 16:10:23.540595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.202 [2024-04-26 16:10:23.541101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.202 [2024-04-26 16:10:23.541144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.202 [2024-04-26 16:10:23.541173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.202 [2024-04-26 16:10:23.541577] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.202 [2024-04-26 16:10:23.541767] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.202 [2024-04-26 16:10:23.541778] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.202 [2024-04-26 16:10:23.541786] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.202 [2024-04-26 16:10:23.544801] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.202 [2024-04-26 16:10:23.553044] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.202 [2024-04-26 16:10:23.553694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.202 [2024-04-26 16:10:23.554194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.202 [2024-04-26 16:10:23.554250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.202 [2024-04-26 16:10:23.554281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.202 [2024-04-26 16:10:23.554568] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.202 [2024-04-26 16:10:23.554758] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.202 [2024-04-26 16:10:23.554769] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.202 [2024-04-26 16:10:23.554778] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.202 [2024-04-26 16:10:23.557694] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.202 [2024-04-26 16:10:23.566230] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.202 [2024-04-26 16:10:23.566852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.202 [2024-04-26 16:10:23.567341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.202 [2024-04-26 16:10:23.567384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.202 [2024-04-26 16:10:23.567414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.202 [2024-04-26 16:10:23.568056] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.202 [2024-04-26 16:10:23.568496] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.202 [2024-04-26 16:10:23.568507] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.202 [2024-04-26 16:10:23.568516] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.202 [2024-04-26 16:10:23.571479] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.202 [2024-04-26 16:10:23.579476] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.202 [2024-04-26 16:10:23.580067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.202 [2024-04-26 16:10:23.580497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.202 [2024-04-26 16:10:23.580538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.202 [2024-04-26 16:10:23.580569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.202 [2024-04-26 16:10:23.581212] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.202 [2024-04-26 16:10:23.581404] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.202 [2024-04-26 16:10:23.581415] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.202 [2024-04-26 16:10:23.581423] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.202 [2024-04-26 16:10:23.584350] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.202 [2024-04-26 16:10:23.592690] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.202 [2024-04-26 16:10:23.593338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.202 [2024-04-26 16:10:23.593841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.202 [2024-04-26 16:10:23.593882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.202 [2024-04-26 16:10:23.593911] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.202 [2024-04-26 16:10:23.594560] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.202 [2024-04-26 16:10:23.594751] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.202 [2024-04-26 16:10:23.594762] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.202 [2024-04-26 16:10:23.594770] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.202 [2024-04-26 16:10:23.597775] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.202 [2024-04-26 16:10:23.605941] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.202 [2024-04-26 16:10:23.606606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.202 [2024-04-26 16:10:23.607106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.202 [2024-04-26 16:10:23.607149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.202 [2024-04-26 16:10:23.607180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.202 [2024-04-26 16:10:23.607664] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.202 [2024-04-26 16:10:23.607856] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.202 [2024-04-26 16:10:23.607867] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.203 [2024-04-26 16:10:23.607875] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.203 [2024-04-26 16:10:23.610799] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.203 [2024-04-26 16:10:23.619040] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.203 [2024-04-26 16:10:23.619664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.203 [2024-04-26 16:10:23.620135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.203 [2024-04-26 16:10:23.620178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.203 [2024-04-26 16:10:23.620208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.203 [2024-04-26 16:10:23.620418] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.203 [2024-04-26 16:10:23.620691] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.203 [2024-04-26 16:10:23.620707] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.203 [2024-04-26 16:10:23.620718] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.203 [2024-04-26 16:10:23.625155] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.203 [2024-04-26 16:10:23.632770] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.203 [2024-04-26 16:10:23.633316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.203 [2024-04-26 16:10:23.633703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.203 [2024-04-26 16:10:23.633743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.203 [2024-04-26 16:10:23.633772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.203 [2024-04-26 16:10:23.634419] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.203 [2024-04-26 16:10:23.634617] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.203 [2024-04-26 16:10:23.634628] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.203 [2024-04-26 16:10:23.634636] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.203 [2024-04-26 16:10:23.637761] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.203 [2024-04-26 16:10:23.646180] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.203 [2024-04-26 16:10:23.646851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.203 [2024-04-26 16:10:23.647195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.203 [2024-04-26 16:10:23.647212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.203 [2024-04-26 16:10:23.647222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.203 [2024-04-26 16:10:23.647421] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.203 [2024-04-26 16:10:23.647618] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.203 [2024-04-26 16:10:23.647629] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.203 [2024-04-26 16:10:23.647637] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.203 [2024-04-26 16:10:23.650728] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.203 [2024-04-26 16:10:23.659589] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.203 [2024-04-26 16:10:23.660239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.203 [2024-04-26 16:10:23.660638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.203 [2024-04-26 16:10:23.660679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.203 [2024-04-26 16:10:23.660708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.203 [2024-04-26 16:10:23.661362] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.203 [2024-04-26 16:10:23.661796] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.203 [2024-04-26 16:10:23.661807] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.203 [2024-04-26 16:10:23.661816] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.203 [2024-04-26 16:10:23.664841] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.203 [2024-04-26 16:10:23.672873] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.203 [2024-04-26 16:10:23.673424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.203 [2024-04-26 16:10:23.673792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.203 [2024-04-26 16:10:23.673833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.203 [2024-04-26 16:10:23.673863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.203 [2024-04-26 16:10:23.674467] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.203 [2024-04-26 16:10:23.674664] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.203 [2024-04-26 16:10:23.674675] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.203 [2024-04-26 16:10:23.674684] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.203 [2024-04-26 16:10:23.677718] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.203 [2024-04-26 16:10:23.686075] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.203 [2024-04-26 16:10:23.686652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.203 [2024-04-26 16:10:23.686997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.203 [2024-04-26 16:10:23.687037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.203 [2024-04-26 16:10:23.687089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.203 [2024-04-26 16:10:23.687563] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.203 [2024-04-26 16:10:23.687754] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.203 [2024-04-26 16:10:23.687764] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.203 [2024-04-26 16:10:23.687773] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.203 [2024-04-26 16:10:23.690695] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.203 [2024-04-26 16:10:23.699150] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.203 [2024-04-26 16:10:23.699732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.203 [2024-04-26 16:10:23.700213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.203 [2024-04-26 16:10:23.700256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.203 [2024-04-26 16:10:23.700285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.203 [2024-04-26 16:10:23.700896] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.203 [2024-04-26 16:10:23.701182] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.203 [2024-04-26 16:10:23.701198] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.203 [2024-04-26 16:10:23.701210] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.203 [2024-04-26 16:10:23.705643] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.203 [2024-04-26 16:10:23.712921] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.203 [2024-04-26 16:10:23.713538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.203 [2024-04-26 16:10:23.713991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.203 [2024-04-26 16:10:23.714032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.203 [2024-04-26 16:10:23.714061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.203 [2024-04-26 16:10:23.714601] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.203 [2024-04-26 16:10:23.714797] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.203 [2024-04-26 16:10:23.714808] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.203 [2024-04-26 16:10:23.714816] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.203 [2024-04-26 16:10:23.717781] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.203 [2024-04-26 16:10:23.725980] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.203 [2024-04-26 16:10:23.726561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.203 [2024-04-26 16:10:23.727064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.203 [2024-04-26 16:10:23.727121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.204 [2024-04-26 16:10:23.727159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.204 [2024-04-26 16:10:23.727586] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.204 [2024-04-26 16:10:23.727777] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.204 [2024-04-26 16:10:23.727787] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.204 [2024-04-26 16:10:23.727796] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.204 [2024-04-26 16:10:23.730736] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.204 [2024-04-26 16:10:23.739196] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.204 [2024-04-26 16:10:23.739847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.204 [2024-04-26 16:10:23.740340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.204 [2024-04-26 16:10:23.740384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.204 [2024-04-26 16:10:23.740413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.204 [2024-04-26 16:10:23.740977] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.204 [2024-04-26 16:10:23.741171] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.204 [2024-04-26 16:10:23.741182] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.204 [2024-04-26 16:10:23.741191] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.204 [2024-04-26 16:10:23.744126] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.204 [2024-04-26 16:10:23.752406] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.204 [2024-04-26 16:10:23.753026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.204 [2024-04-26 16:10:23.753541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.204 [2024-04-26 16:10:23.753583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.204 [2024-04-26 16:10:23.753612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.204 [2024-04-26 16:10:23.754024] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.204 [2024-04-26 16:10:23.754221] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.204 [2024-04-26 16:10:23.754233] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.204 [2024-04-26 16:10:23.754241] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.204 [2024-04-26 16:10:23.757161] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.204 [2024-04-26 16:10:23.765511] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.204 [2024-04-26 16:10:23.766154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.204 [2024-04-26 16:10:23.766649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.204 [2024-04-26 16:10:23.766690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.204 [2024-04-26 16:10:23.766719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.204 [2024-04-26 16:10:23.766998] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.204 [2024-04-26 16:10:23.767194] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.204 [2024-04-26 16:10:23.767205] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.204 [2024-04-26 16:10:23.767213] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.204 [2024-04-26 16:10:23.770133] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.204 [2024-04-26 16:10:23.778640] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.204 [2024-04-26 16:10:23.779260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.204 [2024-04-26 16:10:23.779752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.204 [2024-04-26 16:10:23.779793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.204 [2024-04-26 16:10:23.779823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.204 [2024-04-26 16:10:23.780079] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.204 [2024-04-26 16:10:23.780271] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.204 [2024-04-26 16:10:23.780281] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.204 [2024-04-26 16:10:23.780290] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.204 [2024-04-26 16:10:23.783203] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.204 [2024-04-26 16:10:23.791836] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.204 [2024-04-26 16:10:23.792436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.204 [2024-04-26 16:10:23.792935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.204 [2024-04-26 16:10:23.792977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.204 [2024-04-26 16:10:23.793007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.204 [2024-04-26 16:10:23.793662] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.204 [2024-04-26 16:10:23.793887] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.204 [2024-04-26 16:10:23.793898] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.204 [2024-04-26 16:10:23.793906] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.204 [2024-04-26 16:10:23.796858] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.204 [2024-04-26 16:10:23.804988] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.204 [2024-04-26 16:10:23.805610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.204 [2024-04-26 16:10:23.806111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.204 [2024-04-26 16:10:23.806167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.204 [2024-04-26 16:10:23.806177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.204 [2024-04-26 16:10:23.806373] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.204 [2024-04-26 16:10:23.806563] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.204 [2024-04-26 16:10:23.806574] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.204 [2024-04-26 16:10:23.806582] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.204 [2024-04-26 16:10:23.809499] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.204 [2024-04-26 16:10:23.818266] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.204 [2024-04-26 16:10:23.818921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.204 [2024-04-26 16:10:23.819396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.204 [2024-04-26 16:10:23.819442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.204 [2024-04-26 16:10:23.819474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.204 [2024-04-26 16:10:23.820057] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.204 [2024-04-26 16:10:23.820253] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.204 [2024-04-26 16:10:23.820264] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.204 [2024-04-26 16:10:23.820273] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.204 [2024-04-26 16:10:23.823211] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.204 [2024-04-26 16:10:23.831422] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.204 [2024-04-26 16:10:23.832089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.204 [2024-04-26 16:10:23.832591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.204 [2024-04-26 16:10:23.832632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.204 [2024-04-26 16:10:23.832663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.204 [2024-04-26 16:10:23.833203] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.204 [2024-04-26 16:10:23.833484] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.204 [2024-04-26 16:10:23.833500] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.204 [2024-04-26 16:10:23.833512] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.204 [2024-04-26 16:10:23.837944] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.204 [2024-04-26 16:10:23.845147] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.205 [2024-04-26 16:10:23.845773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.205 [2024-04-26 16:10:23.846247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.205 [2024-04-26 16:10:23.846291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.205 [2024-04-26 16:10:23.846320] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.205 [2024-04-26 16:10:23.846897] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.205 [2024-04-26 16:10:23.847096] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.205 [2024-04-26 16:10:23.847108] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.205 [2024-04-26 16:10:23.847116] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.205 [2024-04-26 16:10:23.850109] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.205 [2024-04-26 16:10:23.858286] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.205 [2024-04-26 16:10:23.858937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.205 [2024-04-26 16:10:23.859437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.205 [2024-04-26 16:10:23.859481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.205 [2024-04-26 16:10:23.859512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.205 [2024-04-26 16:10:23.860084] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.205 [2024-04-26 16:10:23.860275] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.205 [2024-04-26 16:10:23.860286] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.205 [2024-04-26 16:10:23.860294] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.205 [2024-04-26 16:10:23.863236] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.205 [2024-04-26 16:10:23.871697] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.205 [2024-04-26 16:10:23.872307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.205 [2024-04-26 16:10:23.872756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.205 [2024-04-26 16:10:23.872778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.205 [2024-04-26 16:10:23.872790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.205 [2024-04-26 16:10:23.872994] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.205 [2024-04-26 16:10:23.873199] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.205 [2024-04-26 16:10:23.873211] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.205 [2024-04-26 16:10:23.873220] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.465 [2024-04-26 16:10:23.876360] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.465 [2024-04-26 16:10:23.884779] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.465 [2024-04-26 16:10:23.885430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.465 [2024-04-26 16:10:23.885935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.465 [2024-04-26 16:10:23.885978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.465 [2024-04-26 16:10:23.886009] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.465 [2024-04-26 16:10:23.886328] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.465 [2024-04-26 16:10:23.886526] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.465 [2024-04-26 16:10:23.886540] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.465 [2024-04-26 16:10:23.886550] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.465 [2024-04-26 16:10:23.889687] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.465 [2024-04-26 16:10:23.898141] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.465 [2024-04-26 16:10:23.898788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.465 [2024-04-26 16:10:23.899161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.465 [2024-04-26 16:10:23.899207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.465 [2024-04-26 16:10:23.899240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.465 [2024-04-26 16:10:23.899884] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.465 [2024-04-26 16:10:23.900150] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.465 [2024-04-26 16:10:23.900162] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.465 [2024-04-26 16:10:23.900170] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.465 [2024-04-26 16:10:23.903173] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.465 [2024-04-26 16:10:23.911302] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.465 [2024-04-26 16:10:23.911942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.465 [2024-04-26 16:10:23.912418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.465 [2024-04-26 16:10:23.912474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.465 [2024-04-26 16:10:23.912504] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.465 [2024-04-26 16:10:23.913024] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.465 [2024-04-26 16:10:23.913311] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.465 [2024-04-26 16:10:23.913327] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.465 [2024-04-26 16:10:23.913339] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.465 [2024-04-26 16:10:23.917770] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.465 [2024-04-26 16:10:23.924989] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.465 [2024-04-26 16:10:23.925644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.465 [2024-04-26 16:10:23.926141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.465 [2024-04-26 16:10:23.926184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.465 [2024-04-26 16:10:23.926215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.465 [2024-04-26 16:10:23.926858] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.465 [2024-04-26 16:10:23.927147] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.465 [2024-04-26 16:10:23.927158] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.465 [2024-04-26 16:10:23.927170] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.465 [2024-04-26 16:10:23.930152] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.465 [2024-04-26 16:10:23.938120] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.465 [2024-04-26 16:10:23.938734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.465 [2024-04-26 16:10:23.939181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.465 [2024-04-26 16:10:23.939196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.465 [2024-04-26 16:10:23.939205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.465 [2024-04-26 16:10:23.939398] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.465 [2024-04-26 16:10:23.939588] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.465 [2024-04-26 16:10:23.939600] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.465 [2024-04-26 16:10:23.939608] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.465 [2024-04-26 16:10:23.942570] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.466 [2024-04-26 16:10:23.951300] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.466 [2024-04-26 16:10:23.951930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.466 [2024-04-26 16:10:23.952385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.466 [2024-04-26 16:10:23.952428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.466 [2024-04-26 16:10:23.952458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.466 [2024-04-26 16:10:23.953114] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.466 [2024-04-26 16:10:23.953523] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.466 [2024-04-26 16:10:23.953534] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.466 [2024-04-26 16:10:23.953542] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.466 [2024-04-26 16:10:23.957791] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.466 [2024-04-26 16:10:23.965479] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.466 [2024-04-26 16:10:23.966125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.466 [2024-04-26 16:10:23.966553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.466 [2024-04-26 16:10:23.966594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.466 [2024-04-26 16:10:23.966624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.466 [2024-04-26 16:10:23.967143] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.466 [2024-04-26 16:10:23.967333] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.466 [2024-04-26 16:10:23.967344] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.466 [2024-04-26 16:10:23.967355] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.466 [2024-04-26 16:10:23.970313] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.466 [2024-04-26 16:10:23.978532] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.466 [2024-04-26 16:10:23.979152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.466 [2024-04-26 16:10:23.979645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.466 [2024-04-26 16:10:23.979685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.466 [2024-04-26 16:10:23.979715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.466 [2024-04-26 16:10:23.980231] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.466 [2024-04-26 16:10:23.980423] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.466 [2024-04-26 16:10:23.980433] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.466 [2024-04-26 16:10:23.980442] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.466 [2024-04-26 16:10:23.983359] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.466 [2024-04-26 16:10:23.991686] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.466 [2024-04-26 16:10:23.992314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.466 [2024-04-26 16:10:23.992753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.466 [2024-04-26 16:10:23.992793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.466 [2024-04-26 16:10:23.992823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.466 [2024-04-26 16:10:23.993313] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.466 [2024-04-26 16:10:23.993505] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.466 [2024-04-26 16:10:23.993515] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.466 [2024-04-26 16:10:23.993523] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.466 [2024-04-26 16:10:23.996563] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.466 [2024-04-26 16:10:24.004840] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.466 [2024-04-26 16:10:24.005470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.466 [2024-04-26 16:10:24.005969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.466 [2024-04-26 16:10:24.006010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.466 [2024-04-26 16:10:24.006040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.466 [2024-04-26 16:10:24.006327] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.466 [2024-04-26 16:10:24.006519] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.466 [2024-04-26 16:10:24.006529] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.466 [2024-04-26 16:10:24.006538] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.466 [2024-04-26 16:10:24.009460] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.466 [2024-04-26 16:10:24.017950] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.466 [2024-04-26 16:10:24.018603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.466 [2024-04-26 16:10:24.019019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.466 [2024-04-26 16:10:24.019060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.466 [2024-04-26 16:10:24.019104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.466 [2024-04-26 16:10:24.019573] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.466 [2024-04-26 16:10:24.019763] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.466 [2024-04-26 16:10:24.019774] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.466 [2024-04-26 16:10:24.019782] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.466 [2024-04-26 16:10:24.022700] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.466 [2024-04-26 16:10:24.031004] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.466 [2024-04-26 16:10:24.031650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.466 [2024-04-26 16:10:24.032080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.466 [2024-04-26 16:10:24.032094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.466 [2024-04-26 16:10:24.032104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.466 [2024-04-26 16:10:24.032296] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.466 [2024-04-26 16:10:24.032487] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.466 [2024-04-26 16:10:24.032498] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.466 [2024-04-26 16:10:24.032506] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.466 [2024-04-26 16:10:24.035423] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.466 [2024-04-26 16:10:24.044196] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.466 [2024-04-26 16:10:24.044834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.466 [2024-04-26 16:10:24.045312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.466 [2024-04-26 16:10:24.045355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.466 [2024-04-26 16:10:24.045386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.466 [2024-04-26 16:10:24.045673] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.466 [2024-04-26 16:10:24.045870] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.466 [2024-04-26 16:10:24.045881] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.466 [2024-04-26 16:10:24.045889] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.466 [2024-04-26 16:10:24.048848] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.466 [2024-04-26 16:10:24.057407] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.466 [2024-04-26 16:10:24.058059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.466 [2024-04-26 16:10:24.058548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.466 [2024-04-26 16:10:24.058590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.466 [2024-04-26 16:10:24.058620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.467 [2024-04-26 16:10:24.059274] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.467 [2024-04-26 16:10:24.059749] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.467 [2024-04-26 16:10:24.059770] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.467 [2024-04-26 16:10:24.059779] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.467 [2024-04-26 16:10:24.062769] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.467 [2024-04-26 16:10:24.070600] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.467 [2024-04-26 16:10:24.071402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.467 [2024-04-26 16:10:24.071820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.467 [2024-04-26 16:10:24.071866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.467 [2024-04-26 16:10:24.071876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.467 [2024-04-26 16:10:24.072057] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.467 [2024-04-26 16:10:24.072269] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.467 [2024-04-26 16:10:24.072281] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.467 [2024-04-26 16:10:24.072289] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.467 [2024-04-26 16:10:24.075292] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.467 [2024-04-26 16:10:24.083925] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.467 [2024-04-26 16:10:24.084526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.467 [2024-04-26 16:10:24.084941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.467 [2024-04-26 16:10:24.084982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.467 [2024-04-26 16:10:24.085012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.467 [2024-04-26 16:10:24.085568] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.467 [2024-04-26 16:10:24.085849] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.467 [2024-04-26 16:10:24.085864] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.467 [2024-04-26 16:10:24.085876] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.467 [2024-04-26 16:10:24.090316] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.467 [2024-04-26 16:10:24.097756] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.467 [2024-04-26 16:10:24.098375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.467 [2024-04-26 16:10:24.098828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.467 [2024-04-26 16:10:24.098871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.467 [2024-04-26 16:10:24.098901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.467 [2024-04-26 16:10:24.099555] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.467 [2024-04-26 16:10:24.099954] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.467 [2024-04-26 16:10:24.099964] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.467 [2024-04-26 16:10:24.099973] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.467 [2024-04-26 16:10:24.102936] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.467 [2024-04-26 16:10:24.111199] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.467 [2024-04-26 16:10:24.111784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.467 [2024-04-26 16:10:24.112261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.467 [2024-04-26 16:10:24.112304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.467 [2024-04-26 16:10:24.112333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.467 [2024-04-26 16:10:24.112928] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.467 [2024-04-26 16:10:24.113130] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.467 [2024-04-26 16:10:24.113142] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.467 [2024-04-26 16:10:24.113151] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.467 [2024-04-26 16:10:24.116155] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.467 [2024-04-26 16:10:24.124459] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.467 [2024-04-26 16:10:24.125124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.467 [2024-04-26 16:10:24.125417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.467 [2024-04-26 16:10:24.125431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.467 [2024-04-26 16:10:24.125440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.467 [2024-04-26 16:10:24.125633] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.467 [2024-04-26 16:10:24.125823] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.467 [2024-04-26 16:10:24.125834] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.467 [2024-04-26 16:10:24.125842] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.467 [2024-04-26 16:10:24.128846] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.467 [2024-04-26 16:10:24.137692] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.467 [2024-04-26 16:10:24.138365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.467 [2024-04-26 16:10:24.138716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.467 [2024-04-26 16:10:24.138756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.467 [2024-04-26 16:10:24.138796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.467 [2024-04-26 16:10:24.138989] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.467 [2024-04-26 16:10:24.139195] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.467 [2024-04-26 16:10:24.139207] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.467 [2024-04-26 16:10:24.139216] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.467 [2024-04-26 16:10:24.142370] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.728 [2024-04-26 16:10:24.151192] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.728 [2024-04-26 16:10:24.151871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-04-26 16:10:24.152248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-04-26 16:10:24.152308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.728 [2024-04-26 16:10:24.152359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.728 [2024-04-26 16:10:24.152944] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.728 [2024-04-26 16:10:24.153152] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.728 [2024-04-26 16:10:24.153165] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.728 [2024-04-26 16:10:24.153174] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.728 [2024-04-26 16:10:24.156227] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.728 [2024-04-26 16:10:24.164460] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.728 [2024-04-26 16:10:24.165135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-04-26 16:10:24.165499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-04-26 16:10:24.165541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.728 [2024-04-26 16:10:24.165572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.728 [2024-04-26 16:10:24.166038] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.728 [2024-04-26 16:10:24.166326] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.728 [2024-04-26 16:10:24.166342] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.728 [2024-04-26 16:10:24.166354] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.728 [2024-04-26 16:10:24.170795] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.728 [2024-04-26 16:10:24.178607] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.728 [2024-04-26 16:10:24.179260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-04-26 16:10:24.179561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-04-26 16:10:24.179575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.728 [2024-04-26 16:10:24.179585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.728 [2024-04-26 16:10:24.179777] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.728 [2024-04-26 16:10:24.179967] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.728 [2024-04-26 16:10:24.179978] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.728 [2024-04-26 16:10:24.179986] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.728 [2024-04-26 16:10:24.182995] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.728 [2024-04-26 16:10:24.192112] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.728 [2024-04-26 16:10:24.192896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-04-26 16:10:24.193297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-04-26 16:10:24.193312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.728 [2024-04-26 16:10:24.193321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.728 [2024-04-26 16:10:24.193519] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.728 [2024-04-26 16:10:24.193715] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.728 [2024-04-26 16:10:24.193726] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.728 [2024-04-26 16:10:24.193734] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.728 [2024-04-26 16:10:24.196831] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.728 [2024-04-26 16:10:24.205605] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.728 [2024-04-26 16:10:24.206253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-04-26 16:10:24.206616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.728 [2024-04-26 16:10:24.206629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.728 [2024-04-26 16:10:24.206639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.728 [2024-04-26 16:10:24.206836] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.729 [2024-04-26 16:10:24.207033] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.729 [2024-04-26 16:10:24.207044] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.729 [2024-04-26 16:10:24.207053] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.729 [2024-04-26 16:10:24.210149] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.729 [2024-04-26 16:10:24.219135] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.729 [2024-04-26 16:10:24.219720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-04-26 16:10:24.220139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-04-26 16:10:24.220157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.729 [2024-04-26 16:10:24.220166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.729 [2024-04-26 16:10:24.220371] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.729 [2024-04-26 16:10:24.220573] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.729 [2024-04-26 16:10:24.220584] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.729 [2024-04-26 16:10:24.220593] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.729 [2024-04-26 16:10:24.223825] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.729 [2024-04-26 16:10:24.232777] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.729 [2024-04-26 16:10:24.233460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-04-26 16:10:24.233879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-04-26 16:10:24.233893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.729 [2024-04-26 16:10:24.233903] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.729 [2024-04-26 16:10:24.234121] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.729 [2024-04-26 16:10:24.234324] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.729 [2024-04-26 16:10:24.234342] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.729 [2024-04-26 16:10:24.234350] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.729 [2024-04-26 16:10:24.237538] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.729 [2024-04-26 16:10:24.246237] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.729 [2024-04-26 16:10:24.246900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-04-26 16:10:24.247323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-04-26 16:10:24.247338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.729 [2024-04-26 16:10:24.247348] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.729 [2024-04-26 16:10:24.247553] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.729 [2024-04-26 16:10:24.247755] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.729 [2024-04-26 16:10:24.247766] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.729 [2024-04-26 16:10:24.247775] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.729 [2024-04-26 16:10:24.250967] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.729 [2024-04-26 16:10:24.259761] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.729 [2024-04-26 16:10:24.260431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-04-26 16:10:24.260770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-04-26 16:10:24.260784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.729 [2024-04-26 16:10:24.260798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.729 [2024-04-26 16:10:24.261001] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.729 [2024-04-26 16:10:24.261208] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.729 [2024-04-26 16:10:24.261220] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.729 [2024-04-26 16:10:24.261229] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.729 [2024-04-26 16:10:24.264413] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.729 [2024-04-26 16:10:24.273282] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.729 [2024-04-26 16:10:24.273893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-04-26 16:10:24.274193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-04-26 16:10:24.274209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.729 [2024-04-26 16:10:24.274219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.729 [2024-04-26 16:10:24.274429] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.729 [2024-04-26 16:10:24.274625] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.729 [2024-04-26 16:10:24.274635] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.729 [2024-04-26 16:10:24.274644] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.729 [2024-04-26 16:10:24.277734] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.729 [2024-04-26 16:10:24.286687] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.729 [2024-04-26 16:10:24.287320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-04-26 16:10:24.287725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-04-26 16:10:24.287739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.729 [2024-04-26 16:10:24.287749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.729 [2024-04-26 16:10:24.287947] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.729 [2024-04-26 16:10:24.288149] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.729 [2024-04-26 16:10:24.288165] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.729 [2024-04-26 16:10:24.288173] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.729 [2024-04-26 16:10:24.291324] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.729 [2024-04-26 16:10:24.300173] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.729 [2024-04-26 16:10:24.300802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-04-26 16:10:24.301005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-04-26 16:10:24.301019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.729 [2024-04-26 16:10:24.301029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.729 [2024-04-26 16:10:24.301242] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.729 [2024-04-26 16:10:24.301446] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.729 [2024-04-26 16:10:24.301458] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.729 [2024-04-26 16:10:24.301467] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.729 [2024-04-26 16:10:24.304653] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.729 [2024-04-26 16:10:24.313700] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.729 [2024-04-26 16:10:24.314374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-04-26 16:10:24.314726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.729 [2024-04-26 16:10:24.314741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.729 [2024-04-26 16:10:24.314752] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.729 [2024-04-26 16:10:24.314969] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.729 [2024-04-26 16:10:24.315192] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.729 [2024-04-26 16:10:24.315204] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.729 [2024-04-26 16:10:24.315214] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.729 [2024-04-26 16:10:24.318413] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.729 [2024-04-26 16:10:24.327459] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.730 [2024-04-26 16:10:24.328132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-04-26 16:10:24.328554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-04-26 16:10:24.328569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.730 [2024-04-26 16:10:24.328580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.730 [2024-04-26 16:10:24.328796] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.730 [2024-04-26 16:10:24.329011] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.730 [2024-04-26 16:10:24.329023] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.730 [2024-04-26 16:10:24.329032] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.730 [2024-04-26 16:10:24.332431] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.730 [2024-04-26 16:10:24.341182] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.730 [2024-04-26 16:10:24.341850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-04-26 16:10:24.342246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-04-26 16:10:24.342262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.730 [2024-04-26 16:10:24.342272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.730 [2024-04-26 16:10:24.342492] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.730 [2024-04-26 16:10:24.342708] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.730 [2024-04-26 16:10:24.342719] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.730 [2024-04-26 16:10:24.342728] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.730 [2024-04-26 16:10:24.346143] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.730 [2024-04-26 16:10:24.354892] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.730 [2024-04-26 16:10:24.355569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-04-26 16:10:24.355915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-04-26 16:10:24.355930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.730 [2024-04-26 16:10:24.355941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.730 [2024-04-26 16:10:24.356163] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.730 [2024-04-26 16:10:24.356381] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.730 [2024-04-26 16:10:24.356393] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.730 [2024-04-26 16:10:24.356402] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.730 [2024-04-26 16:10:24.359803] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.730 [2024-04-26 16:10:24.368626] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.730 [2024-04-26 16:10:24.369270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-04-26 16:10:24.369665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-04-26 16:10:24.369679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.730 [2024-04-26 16:10:24.369690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.730 [2024-04-26 16:10:24.369907] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.730 [2024-04-26 16:10:24.370127] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.730 [2024-04-26 16:10:24.370139] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.730 [2024-04-26 16:10:24.370149] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.730 [2024-04-26 16:10:24.373549] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.730 [2024-04-26 16:10:24.382196] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.730 [2024-04-26 16:10:24.382690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-04-26 16:10:24.383093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-04-26 16:10:24.383110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.730 [2024-04-26 16:10:24.383120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.730 [2024-04-26 16:10:24.383343] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.730 [2024-04-26 16:10:24.383550] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.730 [2024-04-26 16:10:24.383561] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.730 [2024-04-26 16:10:24.383570] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.730 [2024-04-26 16:10:24.386892] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.730 [2024-04-26 16:10:24.395748] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.730 [2024-04-26 16:10:24.396388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-04-26 16:10:24.396713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.730 [2024-04-26 16:10:24.396727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.730 [2024-04-26 16:10:24.396736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.730 [2024-04-26 16:10:24.396940] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.730 [2024-04-26 16:10:24.397148] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.730 [2024-04-26 16:10:24.397159] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.730 [2024-04-26 16:10:24.397168] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.730 [2024-04-26 16:10:24.400358] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.991 [2024-04-26 16:10:24.409297] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.991 [2024-04-26 16:10:24.409972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.991 [2024-04-26 16:10:24.410372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.991 [2024-04-26 16:10:24.410395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.991 [2024-04-26 16:10:24.410406] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.991 [2024-04-26 16:10:24.410612] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.991 [2024-04-26 16:10:24.410815] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.991 [2024-04-26 16:10:24.410826] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.991 [2024-04-26 16:10:24.410835] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.991 [2024-04-26 16:10:24.414028] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.991 [2024-04-26 16:10:24.422864] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.991 [2024-04-26 16:10:24.423490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.991 [2024-04-26 16:10:24.423970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.991 [2024-04-26 16:10:24.424013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.991 [2024-04-26 16:10:24.424044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.991 [2024-04-26 16:10:24.424584] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.991 [2024-04-26 16:10:24.424787] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.991 [2024-04-26 16:10:24.424803] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.991 [2024-04-26 16:10:24.424812] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.991 [2024-04-26 16:10:24.427991] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.991 [2024-04-26 16:10:24.436245] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.991 [2024-04-26 16:10:24.436921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.991 [2024-04-26 16:10:24.437291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.991 [2024-04-26 16:10:24.437342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.991 [2024-04-26 16:10:24.437352] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.991 [2024-04-26 16:10:24.437553] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.991 [2024-04-26 16:10:24.437752] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.991 [2024-04-26 16:10:24.437763] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.991 [2024-04-26 16:10:24.437772] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.991 [2024-04-26 16:10:24.440869] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.991 [2024-04-26 16:10:24.449428] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.991 [2024-04-26 16:10:24.450106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.991 [2024-04-26 16:10:24.450584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.991 [2024-04-26 16:10:24.450626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.991 [2024-04-26 16:10:24.450656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.991 [2024-04-26 16:10:24.451273] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.991 [2024-04-26 16:10:24.451465] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.991 [2024-04-26 16:10:24.451476] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.991 [2024-04-26 16:10:24.451484] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.991 [2024-04-26 16:10:24.454440] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.991 [2024-04-26 16:10:24.462629] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.991 [2024-04-26 16:10:24.463244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.991 [2024-04-26 16:10:24.463675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.991 [2024-04-26 16:10:24.463716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.991 [2024-04-26 16:10:24.463746] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.991 [2024-04-26 16:10:24.464287] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.991 [2024-04-26 16:10:24.464479] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.991 [2024-04-26 16:10:24.464493] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.991 [2024-04-26 16:10:24.464502] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.991 [2024-04-26 16:10:24.467444] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.991 [2024-04-26 16:10:24.475723] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.991 [2024-04-26 16:10:24.476366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.991 [2024-04-26 16:10:24.476821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.991 [2024-04-26 16:10:24.476863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.991 [2024-04-26 16:10:24.476893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.991 [2024-04-26 16:10:24.477344] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.991 [2024-04-26 16:10:24.477536] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.991 [2024-04-26 16:10:24.477547] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.991 [2024-04-26 16:10:24.477555] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.991 [2024-04-26 16:10:24.480516] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.991 [2024-04-26 16:10:24.489124] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.991 [2024-04-26 16:10:24.489773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.991 [2024-04-26 16:10:24.490189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.991 [2024-04-26 16:10:24.490204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.991 [2024-04-26 16:10:24.490214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.991 [2024-04-26 16:10:24.490412] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.991 [2024-04-26 16:10:24.490608] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.991 [2024-04-26 16:10:24.490619] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.991 [2024-04-26 16:10:24.490628] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.992 [2024-04-26 16:10:24.493556] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.992 [2024-04-26 16:10:24.502266] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.992 [2024-04-26 16:10:24.502919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.992 [2024-04-26 16:10:24.503271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.992 [2024-04-26 16:10:24.503314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.992 [2024-04-26 16:10:24.503344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.992 [2024-04-26 16:10:24.503857] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.992 [2024-04-26 16:10:24.504048] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.992 [2024-04-26 16:10:24.504059] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.992 [2024-04-26 16:10:24.504075] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.992 [2024-04-26 16:10:24.507061] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.992 [2024-04-26 16:10:24.515701] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.992 [2024-04-26 16:10:24.516336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.992 [2024-04-26 16:10:24.516792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.992 [2024-04-26 16:10:24.516833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.992 [2024-04-26 16:10:24.516862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.992 [2024-04-26 16:10:24.517518] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.992 [2024-04-26 16:10:24.517991] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.992 [2024-04-26 16:10:24.518003] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.992 [2024-04-26 16:10:24.518011] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.992 [2024-04-26 16:10:24.521063] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.992 [2024-04-26 16:10:24.528766] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.992 [2024-04-26 16:10:24.529433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.992 [2024-04-26 16:10:24.529905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.992 [2024-04-26 16:10:24.529946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.992 [2024-04-26 16:10:24.529976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.992 [2024-04-26 16:10:24.530493] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.992 [2024-04-26 16:10:24.530774] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.992 [2024-04-26 16:10:24.530789] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.992 [2024-04-26 16:10:24.530801] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.992 [2024-04-26 16:10:24.535235] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.992 [2024-04-26 16:10:24.542654] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.992 [2024-04-26 16:10:24.543327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.992 [2024-04-26 16:10:24.543736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.992 [2024-04-26 16:10:24.543776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.992 [2024-04-26 16:10:24.543807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.992 [2024-04-26 16:10:24.544465] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.992 [2024-04-26 16:10:24.544838] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.992 [2024-04-26 16:10:24.544849] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.992 [2024-04-26 16:10:24.544858] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.992 [2024-04-26 16:10:24.547921] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.992 [2024-04-26 16:10:24.555782] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.992 [2024-04-26 16:10:24.556425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.992 [2024-04-26 16:10:24.556868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.992 [2024-04-26 16:10:24.556909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.992 [2024-04-26 16:10:24.556939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.992 [2024-04-26 16:10:24.557595] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.992 [2024-04-26 16:10:24.557944] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.992 [2024-04-26 16:10:24.557955] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.992 [2024-04-26 16:10:24.557963] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.992 [2024-04-26 16:10:24.560886] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.992 [2024-04-26 16:10:24.568854] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.992 [2024-04-26 16:10:24.569543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.992 [2024-04-26 16:10:24.569907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.992 [2024-04-26 16:10:24.569953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.992 [2024-04-26 16:10:24.569984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.992 [2024-04-26 16:10:24.570288] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.992 [2024-04-26 16:10:24.570481] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.992 [2024-04-26 16:10:24.570492] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.992 [2024-04-26 16:10:24.570500] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.992 [2024-04-26 16:10:24.573419] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.992 [2024-04-26 16:10:24.581919] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.992 [2024-04-26 16:10:24.582611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.992 [2024-04-26 16:10:24.583094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.992 [2024-04-26 16:10:24.583137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.992 [2024-04-26 16:10:24.583166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.992 [2024-04-26 16:10:24.583808] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.992 [2024-04-26 16:10:24.584173] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.992 [2024-04-26 16:10:24.584184] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.992 [2024-04-26 16:10:24.584193] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.992 [2024-04-26 16:10:24.587130] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.992 [2024-04-26 16:10:24.595016] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.992 [2024-04-26 16:10:24.595694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.992 [2024-04-26 16:10:24.596525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.992 [2024-04-26 16:10:24.596587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.992 [2024-04-26 16:10:24.596600] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.992 [2024-04-26 16:10:24.596799] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.992 [2024-04-26 16:10:24.596991] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.992 [2024-04-26 16:10:24.597002] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.992 [2024-04-26 16:10:24.597011] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.992 [2024-04-26 16:10:24.600012] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.992 [2024-04-26 16:10:24.608386] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.992 [2024-04-26 16:10:24.609033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.992 [2024-04-26 16:10:24.609544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.992 [2024-04-26 16:10:24.609587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.993 [2024-04-26 16:10:24.609617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.993 [2024-04-26 16:10:24.610121] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.993 [2024-04-26 16:10:24.610328] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.993 [2024-04-26 16:10:24.610340] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.993 [2024-04-26 16:10:24.610348] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.993 [2024-04-26 16:10:24.613352] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.993 [2024-04-26 16:10:24.621611] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.993 [2024-04-26 16:10:24.622239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.993 [2024-04-26 16:10:24.622641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.993 [2024-04-26 16:10:24.622682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.993 [2024-04-26 16:10:24.622712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.993 [2024-04-26 16:10:24.622987] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.993 [2024-04-26 16:10:24.623194] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.993 [2024-04-26 16:10:24.623206] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.993 [2024-04-26 16:10:24.623214] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.993 [2024-04-26 16:10:24.626132] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.993 [2024-04-26 16:10:24.634738] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.993 [2024-04-26 16:10:24.635413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.993 [2024-04-26 16:10:24.635841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.993 [2024-04-26 16:10:24.635883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.993 [2024-04-26 16:10:24.635913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.993 [2024-04-26 16:10:24.636573] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.993 [2024-04-26 16:10:24.637073] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.993 [2024-04-26 16:10:24.637085] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.993 [2024-04-26 16:10:24.637094] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.993 [2024-04-26 16:10:24.640019] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:44.993 [2024-04-26 16:10:24.648002] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.993 [2024-04-26 16:10:24.648701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.993 [2024-04-26 16:10:24.649173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.993 [2024-04-26 16:10:24.649219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.993 [2024-04-26 16:10:24.649250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.993 [2024-04-26 16:10:24.649893] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.993 [2024-04-26 16:10:24.650304] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.993 [2024-04-26 16:10:24.650322] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.993 [2024-04-26 16:10:24.650331] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.993 [2024-04-26 16:10:24.653445] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:44.993 [2024-04-26 16:10:24.661525] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:44.993 [2024-04-26 16:10:24.662153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.993 [2024-04-26 16:10:24.662567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:44.993 [2024-04-26 16:10:24.662580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:44.993 [2024-04-26 16:10:24.662590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:44.993 [2024-04-26 16:10:24.662789] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:44.993 [2024-04-26 16:10:24.662985] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:44.993 [2024-04-26 16:10:24.662996] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:44.993 [2024-04-26 16:10:24.663005] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:44.993 [2024-04-26 16:10:24.666102] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.254 [2024-04-26 16:10:24.674938] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.254 [2024-04-26 16:10:24.675598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.254 [2024-04-26 16:10:24.675947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.254 [2024-04-26 16:10:24.675962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.254 [2024-04-26 16:10:24.675972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.254 [2024-04-26 16:10:24.676189] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.254 [2024-04-26 16:10:24.676388] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.254 [2024-04-26 16:10:24.676399] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.254 [2024-04-26 16:10:24.676408] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.254 [2024-04-26 16:10:24.679541] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.254 [2024-04-26 16:10:24.688048] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.254 [2024-04-26 16:10:24.688696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.254 [2024-04-26 16:10:24.689171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.254 [2024-04-26 16:10:24.689186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.254 [2024-04-26 16:10:24.689196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.254 [2024-04-26 16:10:24.689390] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.254 [2024-04-26 16:10:24.689581] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.254 [2024-04-26 16:10:24.689591] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.254 [2024-04-26 16:10:24.689600] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.254 [2024-04-26 16:10:24.692487] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.254 [2024-04-26 16:10:24.701168] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.254 [2024-04-26 16:10:24.701824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.254 [2024-04-26 16:10:24.702297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.254 [2024-04-26 16:10:24.702342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.254 [2024-04-26 16:10:24.702372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.254 [2024-04-26 16:10:24.703015] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.254 [2024-04-26 16:10:24.703404] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.254 [2024-04-26 16:10:24.703420] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.254 [2024-04-26 16:10:24.703432] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.254 [2024-04-26 16:10:24.707864] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.254 [2024-04-26 16:10:24.714728] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.254 [2024-04-26 16:10:24.715364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.254 [2024-04-26 16:10:24.715852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.254 [2024-04-26 16:10:24.715900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.254 [2024-04-26 16:10:24.715911] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.254 [2024-04-26 16:10:24.716109] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.254 [2024-04-26 16:10:24.716300] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.254 [2024-04-26 16:10:24.716311] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.254 [2024-04-26 16:10:24.716320] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.254 [2024-04-26 16:10:24.719322] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.254 [2024-04-26 16:10:24.727875] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.254 [2024-04-26 16:10:24.728550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.254 [2024-04-26 16:10:24.728955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.254 [2024-04-26 16:10:24.728996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.254 [2024-04-26 16:10:24.729026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.254 [2024-04-26 16:10:24.729274] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.254 [2024-04-26 16:10:24.729465] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.254 [2024-04-26 16:10:24.729476] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.254 [2024-04-26 16:10:24.729484] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.254 [2024-04-26 16:10:24.732471] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.254 [2024-04-26 16:10:24.740973] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.254 [2024-04-26 16:10:24.741622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.254 [2024-04-26 16:10:24.742042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.254 [2024-04-26 16:10:24.742055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.254 [2024-04-26 16:10:24.742064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.254 [2024-04-26 16:10:24.742264] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.254 [2024-04-26 16:10:24.742455] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.254 [2024-04-26 16:10:24.742466] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.254 [2024-04-26 16:10:24.742474] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.254 [2024-04-26 16:10:24.745400] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.254 [2024-04-26 16:10:24.754118] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.254 [2024-04-26 16:10:24.754789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.254 [2024-04-26 16:10:24.755185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.254 [2024-04-26 16:10:24.755203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.254 [2024-04-26 16:10:24.755213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.254 [2024-04-26 16:10:24.755406] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.254 [2024-04-26 16:10:24.755599] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.254 [2024-04-26 16:10:24.755610] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.254 [2024-04-26 16:10:24.755618] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.254 [2024-04-26 16:10:24.758500] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.254 [2024-04-26 16:10:24.767319] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.254 [2024-04-26 16:10:24.767965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.254 [2024-04-26 16:10:24.768444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.254 [2024-04-26 16:10:24.768487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.254 [2024-04-26 16:10:24.768517] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.254 [2024-04-26 16:10:24.769172] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.254 [2024-04-26 16:10:24.769488] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.254 [2024-04-26 16:10:24.769499] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.255 [2024-04-26 16:10:24.769507] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.255 [2024-04-26 16:10:24.772462] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.255 [2024-04-26 16:10:24.780514] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.255 [2024-04-26 16:10:24.781195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.255 [2024-04-26 16:10:24.781673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.255 [2024-04-26 16:10:24.781714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.255 [2024-04-26 16:10:24.781744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.255 [2024-04-26 16:10:24.782223] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.255 [2024-04-26 16:10:24.782415] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.255 [2024-04-26 16:10:24.782426] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.255 [2024-04-26 16:10:24.782434] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.255 [2024-04-26 16:10:24.785328] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.255 [2024-04-26 16:10:24.793611] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.255 [2024-04-26 16:10:24.794250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.255 [2024-04-26 16:10:24.794724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.255 [2024-04-26 16:10:24.794764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.255 [2024-04-26 16:10:24.794802] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.255 [2024-04-26 16:10:24.795414] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.255 [2024-04-26 16:10:24.795605] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.255 [2024-04-26 16:10:24.795616] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.255 [2024-04-26 16:10:24.795624] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.255 [2024-04-26 16:10:24.798643] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.255 [2024-04-26 16:10:24.806786] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.255 [2024-04-26 16:10:24.807420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.255 [2024-04-26 16:10:24.807780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.255 [2024-04-26 16:10:24.807821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.255 [2024-04-26 16:10:24.807851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.255 [2024-04-26 16:10:24.808391] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.255 [2024-04-26 16:10:24.808583] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.255 [2024-04-26 16:10:24.808594] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.255 [2024-04-26 16:10:24.808603] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.255 [2024-04-26 16:10:24.811521] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.255 [2024-04-26 16:10:24.820029] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.255 [2024-04-26 16:10:24.820471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.255 [2024-04-26 16:10:24.820872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.255 [2024-04-26 16:10:24.820916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.255 [2024-04-26 16:10:24.820947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.255 [2024-04-26 16:10:24.821424] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.255 [2024-04-26 16:10:24.821617] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.255 [2024-04-26 16:10:24.821628] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.255 [2024-04-26 16:10:24.821637] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.255 [2024-04-26 16:10:24.824602] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.255 [2024-04-26 16:10:24.833088] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.255 [2024-04-26 16:10:24.833736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.255 [2024-04-26 16:10:24.834207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.255 [2024-04-26 16:10:24.834264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.255 [2024-04-26 16:10:24.834277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.255 [2024-04-26 16:10:24.834469] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.255 [2024-04-26 16:10:24.834660] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.255 [2024-04-26 16:10:24.834670] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.255 [2024-04-26 16:10:24.834678] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.255 [2024-04-26 16:10:24.837562] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.255 [2024-04-26 16:10:24.846217] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.255 [2024-04-26 16:10:24.846881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.255 [2024-04-26 16:10:24.847386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.255 [2024-04-26 16:10:24.847429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.255 [2024-04-26 16:10:24.847459] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.255 [2024-04-26 16:10:24.848113] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.255 [2024-04-26 16:10:24.848632] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.255 [2024-04-26 16:10:24.848643] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.255 [2024-04-26 16:10:24.848651] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.255 [2024-04-26 16:10:24.851611] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.255 [2024-04-26 16:10:24.859329] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.255 [2024-04-26 16:10:24.859961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.255 [2024-04-26 16:10:24.860423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.255 [2024-04-26 16:10:24.860467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.255 [2024-04-26 16:10:24.860497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.255 [2024-04-26 16:10:24.861152] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.255 [2024-04-26 16:10:24.861619] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.255 [2024-04-26 16:10:24.861630] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.255 [2024-04-26 16:10:24.861638] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.255 [2024-04-26 16:10:24.864599] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.255 [2024-04-26 16:10:24.872484] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.255 [2024-04-26 16:10:24.873091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.255 [2024-04-26 16:10:24.873547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.255 [2024-04-26 16:10:24.873582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.255 [2024-04-26 16:10:24.873592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.255 [2024-04-26 16:10:24.873787] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.255 [2024-04-26 16:10:24.873977] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.255 [2024-04-26 16:10:24.873988] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.255 [2024-04-26 16:10:24.873997] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.255 [2024-04-26 16:10:24.876918] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.255 [2024-04-26 16:10:24.885611] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.255 [2024-04-26 16:10:24.886284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.255 [2024-04-26 16:10:24.886583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.256 [2024-04-26 16:10:24.886624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.256 [2024-04-26 16:10:24.886653] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.256 [2024-04-26 16:10:24.887314] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.256 [2024-04-26 16:10:24.887792] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.256 [2024-04-26 16:10:24.887803] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.256 [2024-04-26 16:10:24.887811] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.256 [2024-04-26 16:10:24.890732] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.256 [2024-04-26 16:10:24.898736] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.256 [2024-04-26 16:10:24.899379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.256 [2024-04-26 16:10:24.899763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.256 [2024-04-26 16:10:24.899805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.256 [2024-04-26 16:10:24.899834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.256 [2024-04-26 16:10:24.900485] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.256 [2024-04-26 16:10:24.900696] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.256 [2024-04-26 16:10:24.900707] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.256 [2024-04-26 16:10:24.900716] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.256 [2024-04-26 16:10:24.903812] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.256 [2024-04-26 16:10:24.912029] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.256 [2024-04-26 16:10:24.912699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.256 [2024-04-26 16:10:24.913047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.256 [2024-04-26 16:10:24.913060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.256 [2024-04-26 16:10:24.913075] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.256 [2024-04-26 16:10:24.913267] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.256 [2024-04-26 16:10:24.913461] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.256 [2024-04-26 16:10:24.913481] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.256 [2024-04-26 16:10:24.913490] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.256 [2024-04-26 16:10:24.916495] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.256 [2024-04-26 16:10:24.925391] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.256 [2024-04-26 16:10:24.926049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.256 [2024-04-26 16:10:24.926453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.256 [2024-04-26 16:10:24.926494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.256 [2024-04-26 16:10:24.926523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.256 [2024-04-26 16:10:24.926749] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.256 [2024-04-26 16:10:24.926940] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.256 [2024-04-26 16:10:24.926950] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.256 [2024-04-26 16:10:24.926959] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.256 [2024-04-26 16:10:24.930006] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.517 [2024-04-26 16:10:24.938766] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.517 [2024-04-26 16:10:24.939418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.517 [2024-04-26 16:10:24.939847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.517 [2024-04-26 16:10:24.939891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.517 [2024-04-26 16:10:24.939924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.517 [2024-04-26 16:10:24.940570] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.517 [2024-04-26 16:10:24.940773] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.517 [2024-04-26 16:10:24.940784] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.517 [2024-04-26 16:10:24.940793] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.517 [2024-04-26 16:10:24.943839] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.517 [2024-04-26 16:10:24.951822] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.517 [2024-04-26 16:10:24.952473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.517 [2024-04-26 16:10:24.952966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.517 [2024-04-26 16:10:24.953009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.517 [2024-04-26 16:10:24.953041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.517 [2024-04-26 16:10:24.953701] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.517 [2024-04-26 16:10:24.953990] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.517 [2024-04-26 16:10:24.954001] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.517 [2024-04-26 16:10:24.954009] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.517 [2024-04-26 16:10:24.956932] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.517 [2024-04-26 16:10:24.964898] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.517 [2024-04-26 16:10:24.965575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.517 [2024-04-26 16:10:24.966050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.517 [2024-04-26 16:10:24.966109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.517 [2024-04-26 16:10:24.966141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.517 [2024-04-26 16:10:24.966572] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.517 [2024-04-26 16:10:24.966763] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.517 [2024-04-26 16:10:24.966774] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.517 [2024-04-26 16:10:24.966782] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.517 [2024-04-26 16:10:24.969704] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.517 [2024-04-26 16:10:24.978122] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.517 [2024-04-26 16:10:24.978784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.517 [2024-04-26 16:10:24.979185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.517 [2024-04-26 16:10:24.979229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.517 [2024-04-26 16:10:24.979260] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.517 [2024-04-26 16:10:24.979863] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.517 [2024-04-26 16:10:24.980054] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.517 [2024-04-26 16:10:24.980065] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.517 [2024-04-26 16:10:24.980078] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.517 [2024-04-26 16:10:24.982993] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.517 [2024-04-26 16:10:24.991226] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.517 [2024-04-26 16:10:24.991904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.517 [2024-04-26 16:10:24.992377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.517 [2024-04-26 16:10:24.992421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.517 [2024-04-26 16:10:24.992452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.517 [2024-04-26 16:10:24.992972] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.517 [2024-04-26 16:10:24.993168] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.517 [2024-04-26 16:10:24.993182] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.517 [2024-04-26 16:10:24.993191] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.517 [2024-04-26 16:10:24.996040] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.517 [2024-04-26 16:10:25.004374] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.517 [2024-04-26 16:10:25.005045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.517 [2024-04-26 16:10:25.005532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.517 [2024-04-26 16:10:25.005573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.517 [2024-04-26 16:10:25.005604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.517 [2024-04-26 16:10:25.005951] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.517 [2024-04-26 16:10:25.006238] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.517 [2024-04-26 16:10:25.006254] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.517 [2024-04-26 16:10:25.006266] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.517 [2024-04-26 16:10:25.010693] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.517 [2024-04-26 16:10:25.018135] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.517 [2024-04-26 16:10:25.018790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.517 [2024-04-26 16:10:25.019311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.517 [2024-04-26 16:10:25.019360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.517 [2024-04-26 16:10:25.019391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.517 [2024-04-26 16:10:25.020017] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.517 [2024-04-26 16:10:25.020218] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.517 [2024-04-26 16:10:25.020230] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.517 [2024-04-26 16:10:25.020238] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.517 [2024-04-26 16:10:25.023256] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.517 [2024-04-26 16:10:25.031216] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.517 [2024-04-26 16:10:25.031844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.517 [2024-04-26 16:10:25.032255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.517 [2024-04-26 16:10:25.032299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.518 [2024-04-26 16:10:25.032330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.518 [2024-04-26 16:10:25.032912] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.518 [2024-04-26 16:10:25.033108] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.518 [2024-04-26 16:10:25.033119] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.518 [2024-04-26 16:10:25.033131] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.518 [2024-04-26 16:10:25.035983] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.518 [2024-04-26 16:10:25.044353] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.518 [2024-04-26 16:10:25.044938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.518 [2024-04-26 16:10:25.045412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.518 [2024-04-26 16:10:25.045466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.518 [2024-04-26 16:10:25.045497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.518 [2024-04-26 16:10:25.046076] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.518 [2024-04-26 16:10:25.046356] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.518 [2024-04-26 16:10:25.046372] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.518 [2024-04-26 16:10:25.046384] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.518 [2024-04-26 16:10:25.050817] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
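Note: the follow-on line in each cycle, "Failed to flush tqpair=... (9): Bad file descriptor", reports errno 9 (EBADF): once the connect attempt has failed and the socket has been torn down, the later flush of the queue pair no longer has a usable descriptor behind it. Below is a small standalone illustration using plain libc, not SPDK, of how an operation on an already-closed descriptor yields that same errno.

/* Illustration only: writing to a descriptor that has already been closed
 * fails with errno 9 (EBADF), the errno shown for the failed flush above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    close(fd);  /* descriptor is now invalid */

    char byte = 0;
    if (write(fd, &byte, 1) < 0) {
        printf("write() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    return 0;
}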
00:27:45.518 [2024-04-26 16:10:25.058020] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.518 [2024-04-26 16:10:25.058683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.518 [2024-04-26 16:10:25.059127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.518 [2024-04-26 16:10:25.059173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.518 [2024-04-26 16:10:25.059204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.518 [2024-04-26 16:10:25.059846] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.518 [2024-04-26 16:10:25.060036] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.518 [2024-04-26 16:10:25.060047] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.518 [2024-04-26 16:10:25.060056] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.518 [2024-04-26 16:10:25.063017] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.518 [2024-04-26 16:10:25.071210] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.518 [2024-04-26 16:10:25.071859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.518 [2024-04-26 16:10:25.072282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.518 [2024-04-26 16:10:25.072325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.518 [2024-04-26 16:10:25.072356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.518 [2024-04-26 16:10:25.072895] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.518 [2024-04-26 16:10:25.073090] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.518 [2024-04-26 16:10:25.073101] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.518 [2024-04-26 16:10:25.073113] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.518 [2024-04-26 16:10:25.075963] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.518 [2024-04-26 16:10:25.084432] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.518 [2024-04-26 16:10:25.085086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.518 [2024-04-26 16:10:25.085561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.518 [2024-04-26 16:10:25.085601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.518 [2024-04-26 16:10:25.085631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.518 [2024-04-26 16:10:25.086164] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.518 [2024-04-26 16:10:25.086355] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.518 [2024-04-26 16:10:25.086366] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.518 [2024-04-26 16:10:25.086375] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.518 [2024-04-26 16:10:25.089269] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.518 [2024-04-26 16:10:25.097566] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.518 [2024-04-26 16:10:25.098186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.518 [2024-04-26 16:10:25.098590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.518 [2024-04-26 16:10:25.098629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.518 [2024-04-26 16:10:25.098659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.518 [2024-04-26 16:10:25.099213] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.518 [2024-04-26 16:10:25.099405] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.518 [2024-04-26 16:10:25.099415] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.518 [2024-04-26 16:10:25.099424] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.518 [2024-04-26 16:10:25.102400] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.518 [2024-04-26 16:10:25.110698] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.518 [2024-04-26 16:10:25.111307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.518 [2024-04-26 16:10:25.111732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.518 [2024-04-26 16:10:25.111773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.518 [2024-04-26 16:10:25.111803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.518 [2024-04-26 16:10:25.112345] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.518 [2024-04-26 16:10:25.112537] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.518 [2024-04-26 16:10:25.112547] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.518 [2024-04-26 16:10:25.112555] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.518 [2024-04-26 16:10:25.115446] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.518 [2024-04-26 16:10:25.123878] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.518 [2024-04-26 16:10:25.124525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.518 [2024-04-26 16:10:25.124880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.518 [2024-04-26 16:10:25.124920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.518 [2024-04-26 16:10:25.124950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.518 [2024-04-26 16:10:25.125609] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.518 [2024-04-26 16:10:25.126221] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.518 [2024-04-26 16:10:25.126233] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.518 [2024-04-26 16:10:25.126241] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.518 [2024-04-26 16:10:25.129177] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.518 [2024-04-26 16:10:25.136961] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.518 [2024-04-26 16:10:25.137640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.518 [2024-04-26 16:10:25.138055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.518 [2024-04-26 16:10:25.138068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.518 [2024-04-26 16:10:25.138083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.518 [2024-04-26 16:10:25.138276] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.518 [2024-04-26 16:10:25.138466] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.518 [2024-04-26 16:10:25.138477] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.518 [2024-04-26 16:10:25.138485] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.519 [2024-04-26 16:10:25.141482] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.519 [2024-04-26 16:10:25.150232] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.519 [2024-04-26 16:10:25.150916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-04-26 16:10:25.151389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-04-26 16:10:25.151434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.519 [2024-04-26 16:10:25.151464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.519 [2024-04-26 16:10:25.152121] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.519 [2024-04-26 16:10:25.152513] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.519 [2024-04-26 16:10:25.152524] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.519 [2024-04-26 16:10:25.152532] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.519 [2024-04-26 16:10:25.155623] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.519 [2024-04-26 16:10:25.163618] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.519 [2024-04-26 16:10:25.164256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-04-26 16:10:25.164734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-04-26 16:10:25.164775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.519 [2024-04-26 16:10:25.164806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.519 [2024-04-26 16:10:25.165250] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.519 [2024-04-26 16:10:25.165442] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.519 [2024-04-26 16:10:25.165452] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.519 [2024-04-26 16:10:25.165461] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.519 [2024-04-26 16:10:25.168457] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.519 [2024-04-26 16:10:25.176753] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.519 [2024-04-26 16:10:25.177412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-04-26 16:10:25.177815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-04-26 16:10:25.177856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.519 [2024-04-26 16:10:25.177886] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.519 [2024-04-26 16:10:25.178528] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.519 [2024-04-26 16:10:25.178809] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.519 [2024-04-26 16:10:25.178825] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.519 [2024-04-26 16:10:25.178837] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.519 [2024-04-26 16:10:25.183271] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.519 [2024-04-26 16:10:25.190564] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.519 [2024-04-26 16:10:25.191184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-04-26 16:10:25.191422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.519 [2024-04-26 16:10:25.191435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.519 [2024-04-26 16:10:25.191445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.519 [2024-04-26 16:10:25.191636] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.519 [2024-04-26 16:10:25.191827] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.519 [2024-04-26 16:10:25.191837] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.519 [2024-04-26 16:10:25.191846] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.519 [2024-04-26 16:10:25.194997] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.780 [2024-04-26 16:10:25.203832] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.780 [2024-04-26 16:10:25.204548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.780 [2024-04-26 16:10:25.204974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.780 [2024-04-26 16:10:25.204991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.780 [2024-04-26 16:10:25.205003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.780 [2024-04-26 16:10:25.205213] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.780 [2024-04-26 16:10:25.205410] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.780 [2024-04-26 16:10:25.205422] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.780 [2024-04-26 16:10:25.205431] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.780 [2024-04-26 16:10:25.208410] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.780 [2024-04-26 16:10:25.216924] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.780 [2024-04-26 16:10:25.217598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.780 [2024-04-26 16:10:25.218093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.780 [2024-04-26 16:10:25.218136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.780 [2024-04-26 16:10:25.218167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.780 [2024-04-26 16:10:25.218664] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.780 [2024-04-26 16:10:25.218944] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.780 [2024-04-26 16:10:25.218959] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.780 [2024-04-26 16:10:25.218971] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.780 [2024-04-26 16:10:25.223408] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.780 [2024-04-26 16:10:25.230739] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.780 [2024-04-26 16:10:25.231385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.780 [2024-04-26 16:10:25.231814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.780 [2024-04-26 16:10:25.231857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.780 [2024-04-26 16:10:25.231888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.780 [2024-04-26 16:10:25.232397] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.780 [2024-04-26 16:10:25.232588] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.780 [2024-04-26 16:10:25.232599] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.780 [2024-04-26 16:10:25.232607] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.780 [2024-04-26 16:10:25.235608] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.780 [2024-04-26 16:10:25.243827] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.780 [2024-04-26 16:10:25.244482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.780 [2024-04-26 16:10:25.244965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.780 [2024-04-26 16:10:25.245005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.780 [2024-04-26 16:10:25.245035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.780 [2024-04-26 16:10:25.245554] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.780 [2024-04-26 16:10:25.245745] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.780 [2024-04-26 16:10:25.245756] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.780 [2024-04-26 16:10:25.245764] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.780 [2024-04-26 16:10:25.248720] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.780 [2024-04-26 16:10:25.256889] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.780 [2024-04-26 16:10:25.257553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.780 [2024-04-26 16:10:25.258005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.780 [2024-04-26 16:10:25.258085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.780 [2024-04-26 16:10:25.258118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.780 [2024-04-26 16:10:25.258653] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.780 [2024-04-26 16:10:25.258933] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.780 [2024-04-26 16:10:25.258948] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.780 [2024-04-26 16:10:25.258960] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.780 [2024-04-26 16:10:25.263396] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.780 [2024-04-26 16:10:25.270664] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.780 [2024-04-26 16:10:25.271337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.780 [2024-04-26 16:10:25.271690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.780 [2024-04-26 16:10:25.271739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.780 [2024-04-26 16:10:25.271749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.780 [2024-04-26 16:10:25.271941] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.780 [2024-04-26 16:10:25.272136] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.780 [2024-04-26 16:10:25.272148] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.780 [2024-04-26 16:10:25.272156] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.780 [2024-04-26 16:10:25.275119] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.780 [2024-04-26 16:10:25.283853] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.780 [2024-04-26 16:10:25.284491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.780 [2024-04-26 16:10:25.284907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.781 [2024-04-26 16:10:25.284922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.781 [2024-04-26 16:10:25.284931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.781 [2024-04-26 16:10:25.285137] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.781 [2024-04-26 16:10:25.285328] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.781 [2024-04-26 16:10:25.285339] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.781 [2024-04-26 16:10:25.285347] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.781 [2024-04-26 16:10:25.288263] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.781 [2024-04-26 16:10:25.296930] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.781 [2024-04-26 16:10:25.297595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.781 [2024-04-26 16:10:25.298085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.781 [2024-04-26 16:10:25.298127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.781 [2024-04-26 16:10:25.298157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.781 [2024-04-26 16:10:25.298558] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.781 [2024-04-26 16:10:25.298754] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.781 [2024-04-26 16:10:25.298765] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.781 [2024-04-26 16:10:25.298774] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.781 [2024-04-26 16:10:25.301718] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.781 [2024-04-26 16:10:25.310000] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.781 [2024-04-26 16:10:25.310674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.781 [2024-04-26 16:10:25.311147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.781 [2024-04-26 16:10:25.311190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.781 [2024-04-26 16:10:25.311219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.781 [2024-04-26 16:10:25.311677] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.781 [2024-04-26 16:10:25.311868] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.781 [2024-04-26 16:10:25.311878] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.781 [2024-04-26 16:10:25.311887] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.781 [2024-04-26 16:10:25.314758] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.781 [2024-04-26 16:10:25.323235] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.781 [2024-04-26 16:10:25.323875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.781 [2024-04-26 16:10:25.324221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.781 [2024-04-26 16:10:25.324236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.781 [2024-04-26 16:10:25.324248] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.781 [2024-04-26 16:10:25.324430] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.781 [2024-04-26 16:10:25.324610] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.781 [2024-04-26 16:10:25.324621] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.781 [2024-04-26 16:10:25.324628] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.781 [2024-04-26 16:10:25.327463] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.781 [2024-04-26 16:10:25.336329] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.781 [2024-04-26 16:10:25.336948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.781 [2024-04-26 16:10:25.337439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.781 [2024-04-26 16:10:25.337482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.781 [2024-04-26 16:10:25.337511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.781 [2024-04-26 16:10:25.338165] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.781 [2024-04-26 16:10:25.338742] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.781 [2024-04-26 16:10:25.338753] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.781 [2024-04-26 16:10:25.338762] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.781 [2024-04-26 16:10:25.341642] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.781 [2024-04-26 16:10:25.349623] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.781 [2024-04-26 16:10:25.350261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.781 [2024-04-26 16:10:25.350703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.781 [2024-04-26 16:10:25.350716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.781 [2024-04-26 16:10:25.350726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.781 [2024-04-26 16:10:25.350919] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.781 [2024-04-26 16:10:25.351115] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.781 [2024-04-26 16:10:25.351126] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.781 [2024-04-26 16:10:25.351134] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.781 [2024-04-26 16:10:25.354049] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.781 [2024-04-26 16:10:25.362909] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.781 [2024-04-26 16:10:25.363562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.781 [2024-04-26 16:10:25.363992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.781 [2024-04-26 16:10:25.364032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.781 [2024-04-26 16:10:25.364081] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.781 [2024-04-26 16:10:25.364548] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.781 [2024-04-26 16:10:25.364739] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.781 [2024-04-26 16:10:25.364750] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.781 [2024-04-26 16:10:25.364759] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.781 [2024-04-26 16:10:25.367778] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.781 [2024-04-26 16:10:25.376110] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.781 [2024-04-26 16:10:25.376769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.781 [2024-04-26 16:10:25.377199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.781 [2024-04-26 16:10:25.377245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.781 [2024-04-26 16:10:25.377275] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.781 [2024-04-26 16:10:25.377870] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.781 [2024-04-26 16:10:25.378051] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.781 [2024-04-26 16:10:25.378062] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.781 [2024-04-26 16:10:25.378074] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.781 [2024-04-26 16:10:25.381049] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.781 [2024-04-26 16:10:25.389357] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.781 [2024-04-26 16:10:25.390002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.781 [2024-04-26 16:10:25.390428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.781 [2024-04-26 16:10:25.390471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.781 [2024-04-26 16:10:25.390501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.781 [2024-04-26 16:10:25.391003] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.781 [2024-04-26 16:10:25.391200] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.781 [2024-04-26 16:10:25.391211] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.782 [2024-04-26 16:10:25.391220] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.782 [2024-04-26 16:10:25.394219] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.782 [2024-04-26 16:10:25.402713] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.782 [2024-04-26 16:10:25.403363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.782 [2024-04-26 16:10:25.403815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.782 [2024-04-26 16:10:25.403856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.782 [2024-04-26 16:10:25.403887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.782 [2024-04-26 16:10:25.404183] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.782 [2024-04-26 16:10:25.404375] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.782 [2024-04-26 16:10:25.404386] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.782 [2024-04-26 16:10:25.404394] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.782 [2024-04-26 16:10:25.407496] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.782 [2024-04-26 16:10:25.416094] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.782 [2024-04-26 16:10:25.416685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.782 [2024-04-26 16:10:25.417157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.782 [2024-04-26 16:10:25.417200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.782 [2024-04-26 16:10:25.417230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.782 [2024-04-26 16:10:25.417755] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.782 [2024-04-26 16:10:25.417945] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.782 [2024-04-26 16:10:25.417956] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.782 [2024-04-26 16:10:25.417964] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.782 [2024-04-26 16:10:25.421065] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.782 [2024-04-26 16:10:25.429445] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.782 [2024-04-26 16:10:25.430057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.782 [2024-04-26 16:10:25.430432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.782 [2024-04-26 16:10:25.430473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.782 [2024-04-26 16:10:25.430503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.782 [2024-04-26 16:10:25.430908] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.782 [2024-04-26 16:10:25.431105] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.782 [2024-04-26 16:10:25.431117] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.782 [2024-04-26 16:10:25.431125] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.782 [2024-04-26 16:10:25.435457] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:45.782 [2024-04-26 16:10:25.443355] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.782 [2024-04-26 16:10:25.444048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.782 [2024-04-26 16:10:25.444460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.782 [2024-04-26 16:10:25.444503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.782 [2024-04-26 16:10:25.444533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.782 [2024-04-26 16:10:25.444826] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.782 [2024-04-26 16:10:25.445017] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.782 [2024-04-26 16:10:25.445028] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.782 [2024-04-26 16:10:25.445037] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:45.782 [2024-04-26 16:10:25.448040] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.782 [2024-04-26 16:10:25.456711] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:45.782 [2024-04-26 16:10:25.457435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.782 [2024-04-26 16:10:25.458000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:45.782 [2024-04-26 16:10:25.458016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:45.782 [2024-04-26 16:10:25.458027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:45.782 [2024-04-26 16:10:25.458232] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:45.782 [2024-04-26 16:10:25.458430] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:45.782 [2024-04-26 16:10:25.458442] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:45.782 [2024-04-26 16:10:25.458451] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.042 [2024-04-26 16:10:25.461663] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.042 [2024-04-26 16:10:25.470159] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.042 [2024-04-26 16:10:25.470745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.042 [2024-04-26 16:10:25.471162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.043 [2024-04-26 16:10:25.471208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.043 [2024-04-26 16:10:25.471241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.043 [2024-04-26 16:10:25.471590] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.043 [2024-04-26 16:10:25.471871] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.043 [2024-04-26 16:10:25.471887] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.043 [2024-04-26 16:10:25.471899] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.043 [2024-04-26 16:10:25.476341] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.043 [2024-04-26 16:10:25.484269] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.043 [2024-04-26 16:10:25.484903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.043 [2024-04-26 16:10:25.485338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.043 [2024-04-26 16:10:25.485356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.043 [2024-04-26 16:10:25.485366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.043 [2024-04-26 16:10:25.485567] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.043 [2024-04-26 16:10:25.485767] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.043 [2024-04-26 16:10:25.485779] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.043 [2024-04-26 16:10:25.485788] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.043 [2024-04-26 16:10:25.488872] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.043 [2024-04-26 16:10:25.497569] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.043 [2024-04-26 16:10:25.498237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.043 [2024-04-26 16:10:25.498632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.043 [2024-04-26 16:10:25.498647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.043 [2024-04-26 16:10:25.498657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.043 [2024-04-26 16:10:25.498849] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.043 [2024-04-26 16:10:25.499040] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.043 [2024-04-26 16:10:25.499051] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.043 [2024-04-26 16:10:25.499059] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.043 [2024-04-26 16:10:25.502043] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.043 [2024-04-26 16:10:25.510729] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.043 [2024-04-26 16:10:25.511325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.043 [2024-04-26 16:10:25.511682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.043 [2024-04-26 16:10:25.511723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.043 [2024-04-26 16:10:25.511753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.043 [2024-04-26 16:10:25.512321] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.043 [2024-04-26 16:10:25.512602] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.043 [2024-04-26 16:10:25.512617] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.043 [2024-04-26 16:10:25.512629] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.043 [2024-04-26 16:10:25.517067] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.043 [2024-04-26 16:10:25.524580] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.043 [2024-04-26 16:10:25.525239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.043 [2024-04-26 16:10:25.525600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.043 [2024-04-26 16:10:25.525641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.043 [2024-04-26 16:10:25.525671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.043 [2024-04-26 16:10:25.526202] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.043 [2024-04-26 16:10:25.526394] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.043 [2024-04-26 16:10:25.526407] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.043 [2024-04-26 16:10:25.526416] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.043 [2024-04-26 16:10:25.529409] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.043 [2024-04-26 16:10:25.537732] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.043 [2024-04-26 16:10:25.538359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.043 [2024-04-26 16:10:25.538775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.043 [2024-04-26 16:10:25.538815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.043 [2024-04-26 16:10:25.538845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.043 [2024-04-26 16:10:25.539319] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.043 [2024-04-26 16:10:25.539511] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.043 [2024-04-26 16:10:25.539521] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.043 [2024-04-26 16:10:25.539529] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.043 [2024-04-26 16:10:25.542553] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.043 [2024-04-26 16:10:25.550917] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.043 [2024-04-26 16:10:25.551486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.043 [2024-04-26 16:10:25.551955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.043 [2024-04-26 16:10:25.551996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.043 [2024-04-26 16:10:25.552026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.043 [2024-04-26 16:10:25.552287] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.043 [2024-04-26 16:10:25.552478] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.043 [2024-04-26 16:10:25.552489] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.043 [2024-04-26 16:10:25.552498] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.043 [2024-04-26 16:10:25.556862] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.043 [2024-04-26 16:10:25.564852] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.043 [2024-04-26 16:10:25.565456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.043 [2024-04-26 16:10:25.565965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.043 [2024-04-26 16:10:25.566004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.043 [2024-04-26 16:10:25.566034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.043 [2024-04-26 16:10:25.566254] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.043 [2024-04-26 16:10:25.566445] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.043 [2024-04-26 16:10:25.566456] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.043 [2024-04-26 16:10:25.566467] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.043 [2024-04-26 16:10:25.569434] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.043 [2024-04-26 16:10:25.578087] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.043 [2024-04-26 16:10:25.578599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.043 [2024-04-26 16:10:25.578979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.043 [2024-04-26 16:10:25.579019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.043 [2024-04-26 16:10:25.579049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.043 [2024-04-26 16:10:25.579487] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.044 [2024-04-26 16:10:25.579679] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.044 [2024-04-26 16:10:25.579690] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.044 [2024-04-26 16:10:25.579698] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.044 [2024-04-26 16:10:25.582620] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.044 [2024-04-26 16:10:25.591235] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.044 [2024-04-26 16:10:25.591847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.044 [2024-04-26 16:10:25.592232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.044 [2024-04-26 16:10:25.592248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.044 [2024-04-26 16:10:25.592258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.044 [2024-04-26 16:10:25.592457] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.044 [2024-04-26 16:10:25.592638] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.044 [2024-04-26 16:10:25.592648] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.044 [2024-04-26 16:10:25.592656] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.044 [2024-04-26 16:10:25.595617] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.044 [2024-04-26 16:10:25.604452] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.044 [2024-04-26 16:10:25.605043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.044 [2024-04-26 16:10:25.605466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.044 [2024-04-26 16:10:25.605514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.044 [2024-04-26 16:10:25.605524] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.044 [2024-04-26 16:10:25.605716] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.044 [2024-04-26 16:10:25.605908] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.044 [2024-04-26 16:10:25.605919] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.044 [2024-04-26 16:10:25.605931] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.044 [2024-04-26 16:10:25.608918] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.044 [2024-04-26 16:10:25.617881] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.044 [2024-04-26 16:10:25.618570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.044 [2024-04-26 16:10:25.618918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.044 [2024-04-26 16:10:25.618932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.044 [2024-04-26 16:10:25.618942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.044 [2024-04-26 16:10:25.619147] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.044 [2024-04-26 16:10:25.619346] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.044 [2024-04-26 16:10:25.619357] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.044 [2024-04-26 16:10:25.619366] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.044 [2024-04-26 16:10:25.622457] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.044 [2024-04-26 16:10:25.631217] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.044 [2024-04-26 16:10:25.631807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.044 [2024-04-26 16:10:25.632192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.044 [2024-04-26 16:10:25.632206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.044 [2024-04-26 16:10:25.632216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.044 [2024-04-26 16:10:25.632414] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.044 [2024-04-26 16:10:25.632610] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.044 [2024-04-26 16:10:25.632621] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.044 [2024-04-26 16:10:25.632630] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.044 [2024-04-26 16:10:25.635722] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.044 [2024-04-26 16:10:25.644749] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.044 [2024-04-26 16:10:25.645354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.044 [2024-04-26 16:10:25.645700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.044 [2024-04-26 16:10:25.645714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.044 [2024-04-26 16:10:25.645724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.044 [2024-04-26 16:10:25.645927] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.044 [2024-04-26 16:10:25.646134] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.044 [2024-04-26 16:10:25.646146] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.044 [2024-04-26 16:10:25.646156] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.044 [2024-04-26 16:10:25.649352] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.044 [2024-04-26 16:10:25.658336] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.044 [2024-04-26 16:10:25.658967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.044 [2024-04-26 16:10:25.659413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.044 [2024-04-26 16:10:25.659456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.044 [2024-04-26 16:10:25.659486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.044 [2024-04-26 16:10:25.660138] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.044 [2024-04-26 16:10:25.660383] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.044 [2024-04-26 16:10:25.660394] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.044 [2024-04-26 16:10:25.660403] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.044 [2024-04-26 16:10:25.663599] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.044 [2024-04-26 16:10:25.671708] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.044 [2024-04-26 16:10:25.672301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.044 [2024-04-26 16:10:25.672708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.044 [2024-04-26 16:10:25.672749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.044 [2024-04-26 16:10:25.672779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.044 [2024-04-26 16:10:25.673435] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.044 [2024-04-26 16:10:25.673831] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.044 [2024-04-26 16:10:25.673843] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.044 [2024-04-26 16:10:25.673851] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.044 [2024-04-26 16:10:25.678078] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.044 [2024-04-26 16:10:25.685763] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.044 [2024-04-26 16:10:25.686398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.044 [2024-04-26 16:10:25.686869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.044 [2024-04-26 16:10:25.686908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.044 [2024-04-26 16:10:25.686938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.044 [2024-04-26 16:10:25.687408] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.044 [2024-04-26 16:10:25.687605] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.044 [2024-04-26 16:10:25.687616] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.044 [2024-04-26 16:10:25.687624] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.044 [2024-04-26 16:10:25.690722] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.044 [2024-04-26 16:10:25.699062] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.044 [2024-04-26 16:10:25.699640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.044 [2024-04-26 16:10:25.700059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.045 [2024-04-26 16:10:25.700115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.045 [2024-04-26 16:10:25.700145] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.045 [2024-04-26 16:10:25.700705] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.045 [2024-04-26 16:10:25.700896] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.045 [2024-04-26 16:10:25.700907] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.045 [2024-04-26 16:10:25.700915] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.045 [2024-04-26 16:10:25.703898] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.045 [2024-04-26 16:10:25.712356] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.045 [2024-04-26 16:10:25.713053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.045 [2024-04-26 16:10:25.713406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.045 [2024-04-26 16:10:25.713420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.045 [2024-04-26 16:10:25.713430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.045 [2024-04-26 16:10:25.713622] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.045 [2024-04-26 16:10:25.713812] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.045 [2024-04-26 16:10:25.713823] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.045 [2024-04-26 16:10:25.713831] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.045 [2024-04-26 16:10:25.716822] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.305 [2024-04-26 16:10:25.725727] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.305 [2024-04-26 16:10:25.726357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.305 [2024-04-26 16:10:25.726720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.305 [2024-04-26 16:10:25.726763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.305 [2024-04-26 16:10:25.726795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.305 [2024-04-26 16:10:25.727454] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.305 [2024-04-26 16:10:25.727921] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.305 [2024-04-26 16:10:25.727932] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.305 [2024-04-26 16:10:25.727940] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.305 [2024-04-26 16:10:25.731085] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.305 [2024-04-26 16:10:25.739012] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.305 [2024-04-26 16:10:25.739657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.305 [2024-04-26 16:10:25.740135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.305 [2024-04-26 16:10:25.740179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.305 [2024-04-26 16:10:25.740211] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.305 [2024-04-26 16:10:25.740842] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.305 [2024-04-26 16:10:25.741039] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.305 [2024-04-26 16:10:25.741050] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.305 [2024-04-26 16:10:25.741059] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.305 [2024-04-26 16:10:25.744011] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.305 [2024-04-26 16:10:25.752392] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.305 [2024-04-26 16:10:25.753022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.305 [2024-04-26 16:10:25.753400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.305 [2024-04-26 16:10:25.753443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.305 [2024-04-26 16:10:25.753473] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.305 [2024-04-26 16:10:25.754019] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.305 [2024-04-26 16:10:25.754220] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.305 [2024-04-26 16:10:25.754231] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.305 [2024-04-26 16:10:25.754240] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.305 [2024-04-26 16:10:25.757244] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.305 [2024-04-26 16:10:25.765577] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.305 [2024-04-26 16:10:25.766216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.305 [2024-04-26 16:10:25.766644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.305 [2024-04-26 16:10:25.766685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.305 [2024-04-26 16:10:25.766715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.305 [2024-04-26 16:10:25.767374] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.305 [2024-04-26 16:10:25.767863] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.305 [2024-04-26 16:10:25.767881] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.305 [2024-04-26 16:10:25.767889] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.305 [2024-04-26 16:10:25.770875] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.305 [2024-04-26 16:10:25.778836] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.305 [2024-04-26 16:10:25.779452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.305 [2024-04-26 16:10:25.779877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.305 [2024-04-26 16:10:25.779917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.305 [2024-04-26 16:10:25.779947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.305 [2024-04-26 16:10:25.780568] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.305 [2024-04-26 16:10:25.780760] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.305 [2024-04-26 16:10:25.780771] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.305 [2024-04-26 16:10:25.780779] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.305 [2024-04-26 16:10:25.783755] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.305 [2024-04-26 16:10:25.791994] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.305 [2024-04-26 16:10:25.792570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.305 [2024-04-26 16:10:25.792978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.305 [2024-04-26 16:10:25.793018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.305 [2024-04-26 16:10:25.793048] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.305 [2024-04-26 16:10:25.793702] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.305 [2024-04-26 16:10:25.794082] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.305 [2024-04-26 16:10:25.794094] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.305 [2024-04-26 16:10:25.794102] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.305 [2024-04-26 16:10:25.797100] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.305 [2024-04-26 16:10:25.805251] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.306 [2024-04-26 16:10:25.805752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.806143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.806186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.306 [2024-04-26 16:10:25.806216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.306 [2024-04-26 16:10:25.806708] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.306 [2024-04-26 16:10:25.806898] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.306 [2024-04-26 16:10:25.806909] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.306 [2024-04-26 16:10:25.806917] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.306 [2024-04-26 16:10:25.809841] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.306 [2024-04-26 16:10:25.818651] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.306 [2024-04-26 16:10:25.819257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.819561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.819576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.306 [2024-04-26 16:10:25.819586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.306 [2024-04-26 16:10:25.819778] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.306 [2024-04-26 16:10:25.819968] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.306 [2024-04-26 16:10:25.819980] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.306 [2024-04-26 16:10:25.819988] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.306 [2024-04-26 16:10:25.822972] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.306 [2024-04-26 16:10:25.831984] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.306 [2024-04-26 16:10:25.832554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.832845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.832859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.306 [2024-04-26 16:10:25.832868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.306 [2024-04-26 16:10:25.833061] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.306 [2024-04-26 16:10:25.833258] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.306 [2024-04-26 16:10:25.833269] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.306 [2024-04-26 16:10:25.833278] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.306 [2024-04-26 16:10:25.836277] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.306 [2024-04-26 16:10:25.845246] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.306 [2024-04-26 16:10:25.845833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.846234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.846277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.306 [2024-04-26 16:10:25.846307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.306 [2024-04-26 16:10:25.846949] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.306 [2024-04-26 16:10:25.847217] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.306 [2024-04-26 16:10:25.847229] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.306 [2024-04-26 16:10:25.847237] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.306 [2024-04-26 16:10:25.850221] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.306 [2024-04-26 16:10:25.858528] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.306 [2024-04-26 16:10:25.859168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.859594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.859644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.306 [2024-04-26 16:10:25.859674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.306 [2024-04-26 16:10:25.860120] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.306 [2024-04-26 16:10:25.860311] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.306 [2024-04-26 16:10:25.860322] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.306 [2024-04-26 16:10:25.860331] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.306 [2024-04-26 16:10:25.863249] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.306 [2024-04-26 16:10:25.871679] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.306 [2024-04-26 16:10:25.872317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.872747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.872787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.306 [2024-04-26 16:10:25.872817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.306 [2024-04-26 16:10:25.873053] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.306 [2024-04-26 16:10:25.873247] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.306 [2024-04-26 16:10:25.873258] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.306 [2024-04-26 16:10:25.873266] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.306 [2024-04-26 16:10:25.876202] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.306 [2024-04-26 16:10:25.884786] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.306 [2024-04-26 16:10:25.885430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.885871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.885885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.306 [2024-04-26 16:10:25.885895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.306 [2024-04-26 16:10:25.886093] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.306 [2024-04-26 16:10:25.886284] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.306 [2024-04-26 16:10:25.886295] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.306 [2024-04-26 16:10:25.886304] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.306 [2024-04-26 16:10:25.889222] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.306 [2024-04-26 16:10:25.897896] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.306 [2024-04-26 16:10:25.898552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.899050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.899107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.306 [2024-04-26 16:10:25.899154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.306 [2024-04-26 16:10:25.899347] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.306 [2024-04-26 16:10:25.899539] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.306 [2024-04-26 16:10:25.899549] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.306 [2024-04-26 16:10:25.899558] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.306 [2024-04-26 16:10:25.902531] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.306 [2024-04-26 16:10:25.911117] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.306 [2024-04-26 16:10:25.911798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.912299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.912343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.306 [2024-04-26 16:10:25.912373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.306 [2024-04-26 16:10:25.913026] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.306 [2024-04-26 16:10:25.913227] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.306 [2024-04-26 16:10:25.913238] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.306 [2024-04-26 16:10:25.913247] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.306 [2024-04-26 16:10:25.916352] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.306 [2024-04-26 16:10:25.924404] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.306 [2024-04-26 16:10:25.925051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.925559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.925601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.306 [2024-04-26 16:10:25.925630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.306 [2024-04-26 16:10:25.926260] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.306 [2024-04-26 16:10:25.926540] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.306 [2024-04-26 16:10:25.926555] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.306 [2024-04-26 16:10:25.926567] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.306 [2024-04-26 16:10:25.930998] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.306 [2024-04-26 16:10:25.938390] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.306 [2024-04-26 16:10:25.939024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.939461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.939503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.306 [2024-04-26 16:10:25.939533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.306 [2024-04-26 16:10:25.940220] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.306 [2024-04-26 16:10:25.940417] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.306 [2024-04-26 16:10:25.940428] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.306 [2024-04-26 16:10:25.940437] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.306 [2024-04-26 16:10:25.943475] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.306 [2024-04-26 16:10:25.951529] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.306 [2024-04-26 16:10:25.952149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.952587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.952626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.306 [2024-04-26 16:10:25.952656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.306 [2024-04-26 16:10:25.953143] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.306 [2024-04-26 16:10:25.953334] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.306 [2024-04-26 16:10:25.953345] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.306 [2024-04-26 16:10:25.953353] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.306 [2024-04-26 16:10:25.956275] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.306 [2024-04-26 16:10:25.964608] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.306 [2024-04-26 16:10:25.965251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.965758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.965800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.306 [2024-04-26 16:10:25.965829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.306 [2024-04-26 16:10:25.966260] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.306 [2024-04-26 16:10:25.966451] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.306 [2024-04-26 16:10:25.966462] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.306 [2024-04-26 16:10:25.966470] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.306 [2024-04-26 16:10:25.969388] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.306 [2024-04-26 16:10:25.977769] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.306 [2024-04-26 16:10:25.978444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.978870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.306 [2024-04-26 16:10:25.978910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.306 [2024-04-26 16:10:25.978940] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.306 [2024-04-26 16:10:25.979200] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.306 [2024-04-26 16:10:25.979391] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.306 [2024-04-26 16:10:25.979402] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.306 [2024-04-26 16:10:25.979411] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.306 [2024-04-26 16:10:25.982445] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.567 [2024-04-26 16:10:25.991184] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.567 [2024-04-26 16:10:25.991843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.567 [2024-04-26 16:10:25.992285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.567 [2024-04-26 16:10:25.992300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.567 [2024-04-26 16:10:25.992312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.567 [2024-04-26 16:10:25.992512] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.567 [2024-04-26 16:10:25.992709] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.567 [2024-04-26 16:10:25.992720] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.567 [2024-04-26 16:10:25.992729] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.567 [2024-04-26 16:10:25.995675] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.567 [2024-04-26 16:10:26.004292] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.567 [2024-04-26 16:10:26.004912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.567 [2024-04-26 16:10:26.005410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.567 [2024-04-26 16:10:26.005454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.567 [2024-04-26 16:10:26.005485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.567 [2024-04-26 16:10:26.006140] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.567 [2024-04-26 16:10:26.006524] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.567 [2024-04-26 16:10:26.006535] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.567 [2024-04-26 16:10:26.006544] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.567 [2024-04-26 16:10:26.009503] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.567 [2024-04-26 16:10:26.017445] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.567 [2024-04-26 16:10:26.018096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.567 [2024-04-26 16:10:26.018601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.567 [2024-04-26 16:10:26.018643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.567 [2024-04-26 16:10:26.018674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.567 [2024-04-26 16:10:26.019207] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.567 [2024-04-26 16:10:26.019401] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.567 [2024-04-26 16:10:26.019412] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.567 [2024-04-26 16:10:26.019421] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.567 [2024-04-26 16:10:26.022339] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.567 [2024-04-26 16:10:26.030500] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.567 [2024-04-26 16:10:26.031125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.567 [2024-04-26 16:10:26.031627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.567 [2024-04-26 16:10:26.031668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.567 [2024-04-26 16:10:26.031698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.567 [2024-04-26 16:10:26.032218] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.567 [2024-04-26 16:10:26.032410] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.567 [2024-04-26 16:10:26.032420] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.567 [2024-04-26 16:10:26.032429] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.567 [2024-04-26 16:10:26.035343] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.567 [2024-04-26 16:10:26.043825] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.567 [2024-04-26 16:10:26.044452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.567 [2024-04-26 16:10:26.044864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.567 [2024-04-26 16:10:26.044904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.567 [2024-04-26 16:10:26.044934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.567 [2024-04-26 16:10:26.045461] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.567 [2024-04-26 16:10:26.045653] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.567 [2024-04-26 16:10:26.045663] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.567 [2024-04-26 16:10:26.045672] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.567 [2024-04-26 16:10:26.048703] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.567 [2024-04-26 16:10:26.056949] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.567 [2024-04-26 16:10:26.057594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.567 [2024-04-26 16:10:26.058031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.567 [2024-04-26 16:10:26.058084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.568 [2024-04-26 16:10:26.058116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.568 [2024-04-26 16:10:26.058619] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.568 [2024-04-26 16:10:26.058809] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.568 [2024-04-26 16:10:26.058823] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.568 [2024-04-26 16:10:26.058832] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.568 [2024-04-26 16:10:26.061750] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.568 [2024-04-26 16:10:26.070059] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.568 [2024-04-26 16:10:26.070704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.568 [2024-04-26 16:10:26.071205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.568 [2024-04-26 16:10:26.071248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.568 [2024-04-26 16:10:26.071278] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.568 [2024-04-26 16:10:26.071729] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.568 [2024-04-26 16:10:26.071925] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.568 [2024-04-26 16:10:26.071936] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.568 [2024-04-26 16:10:26.071945] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.568 [2024-04-26 16:10:26.074979] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.568 [2024-04-26 16:10:26.083240] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.568 [2024-04-26 16:10:26.083869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.568 [2024-04-26 16:10:26.084353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.568 [2024-04-26 16:10:26.084389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.568 [2024-04-26 16:10:26.084398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.568 [2024-04-26 16:10:26.084592] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.568 [2024-04-26 16:10:26.084783] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.568 [2024-04-26 16:10:26.084794] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.568 [2024-04-26 16:10:26.084802] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.568 [2024-04-26 16:10:26.087721] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.568 [2024-04-26 16:10:26.096373] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.568 [2024-04-26 16:10:26.096967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.568 [2024-04-26 16:10:26.097387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.568 [2024-04-26 16:10:26.097430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.568 [2024-04-26 16:10:26.097460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.568 [2024-04-26 16:10:26.098113] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.568 [2024-04-26 16:10:26.098618] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.568 [2024-04-26 16:10:26.098632] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.568 [2024-04-26 16:10:26.098641] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.568 [2024-04-26 16:10:26.102977] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.568 [2024-04-26 16:10:26.110559] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.568 [2024-04-26 16:10:26.111207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.568 [2024-04-26 16:10:26.111705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.568 [2024-04-26 16:10:26.111760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.568 [2024-04-26 16:10:26.111790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.568 [2024-04-26 16:10:26.112450] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.568 [2024-04-26 16:10:26.112857] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.568 [2024-04-26 16:10:26.112867] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.568 [2024-04-26 16:10:26.112876] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.568 [2024-04-26 16:10:26.115842] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.568 [2024-04-26 16:10:26.123705] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.568 [2024-04-26 16:10:26.124287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.568 [2024-04-26 16:10:26.124707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.568 [2024-04-26 16:10:26.124746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.568 [2024-04-26 16:10:26.124775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.568 [2024-04-26 16:10:26.125434] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.568 [2024-04-26 16:10:26.125722] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.568 [2024-04-26 16:10:26.125733] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.568 [2024-04-26 16:10:26.125741] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.568 [2024-04-26 16:10:26.128655] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.568 [2024-04-26 16:10:26.136811] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.568 [2024-04-26 16:10:26.137454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.568 [2024-04-26 16:10:26.137930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.568 [2024-04-26 16:10:26.137970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.568 [2024-04-26 16:10:26.137999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.568 [2024-04-26 16:10:26.138485] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.568 [2024-04-26 16:10:26.138676] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.568 [2024-04-26 16:10:26.138687] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.568 [2024-04-26 16:10:26.138698] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.568 [2024-04-26 16:10:26.142961] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.568 [2024-04-26 16:10:26.150788] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.568 [2024-04-26 16:10:26.151435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.568 [2024-04-26 16:10:26.151933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.568 [2024-04-26 16:10:26.151974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.568 [2024-04-26 16:10:26.152004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.568 [2024-04-26 16:10:26.152659] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.568 [2024-04-26 16:10:26.153026] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.568 [2024-04-26 16:10:26.153037] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.568 [2024-04-26 16:10:26.153045] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.568 [2024-04-26 16:10:26.156006] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.569 [2024-04-26 16:10:26.163959] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.569 [2024-04-26 16:10:26.164592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.569 [2024-04-26 16:10:26.165029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.569 [2024-04-26 16:10:26.165082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.569 [2024-04-26 16:10:26.165114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.569 [2024-04-26 16:10:26.165753] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.569 [2024-04-26 16:10:26.165950] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.569 [2024-04-26 16:10:26.165961] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.569 [2024-04-26 16:10:26.165970] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.569 [2024-04-26 16:10:26.169020] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.569 [2024-04-26 16:10:26.177194] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.569 [2024-04-26 16:10:26.177862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.569 [2024-04-26 16:10:26.178390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.569 [2024-04-26 16:10:26.178434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.569 [2024-04-26 16:10:26.178476] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.569 [2024-04-26 16:10:26.178673] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.569 [2024-04-26 16:10:26.178863] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.569 [2024-04-26 16:10:26.178874] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.569 [2024-04-26 16:10:26.178882] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.569 [2024-04-26 16:10:26.181884] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.569 [2024-04-26 16:10:26.190419] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.569 [2024-04-26 16:10:26.191058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.569 [2024-04-26 16:10:26.191568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.569 [2024-04-26 16:10:26.191613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.569 [2024-04-26 16:10:26.191623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.569 [2024-04-26 16:10:26.191815] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.569 [2024-04-26 16:10:26.192006] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.569 [2024-04-26 16:10:26.192017] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.569 [2024-04-26 16:10:26.192025] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.569 [2024-04-26 16:10:26.195089] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.569 [2024-04-26 16:10:26.203676] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.569 [2024-04-26 16:10:26.204254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.569 [2024-04-26 16:10:26.204752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.569 [2024-04-26 16:10:26.204792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.569 [2024-04-26 16:10:26.204823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.569 [2024-04-26 16:10:26.205481] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.569 [2024-04-26 16:10:26.205936] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.569 [2024-04-26 16:10:26.205947] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.569 [2024-04-26 16:10:26.205956] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.569 [2024-04-26 16:10:26.208873] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.569 [2024-04-26 16:10:26.216839] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.569 [2024-04-26 16:10:26.217486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.569 [2024-04-26 16:10:26.217961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.569 [2024-04-26 16:10:26.218003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.569 [2024-04-26 16:10:26.218032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.569 [2024-04-26 16:10:26.218688] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.569 [2024-04-26 16:10:26.219132] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.569 [2024-04-26 16:10:26.219144] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.569 [2024-04-26 16:10:26.219152] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.569 [2024-04-26 16:10:26.222181] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.569 [2024-04-26 16:10:26.229957] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.569 [2024-04-26 16:10:26.230603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.569 [2024-04-26 16:10:26.231104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.569 [2024-04-26 16:10:26.231147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.569 [2024-04-26 16:10:26.231176] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.569 [2024-04-26 16:10:26.231656] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.569 [2024-04-26 16:10:26.231847] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.569 [2024-04-26 16:10:26.231858] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.569 [2024-04-26 16:10:26.231866] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.569 [2024-04-26 16:10:26.234782] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.569 [2024-04-26 16:10:26.243216] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.569 [2024-04-26 16:10:26.243866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.569 [2024-04-26 16:10:26.244298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.569 [2024-04-26 16:10:26.244315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.569 [2024-04-26 16:10:26.244326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.569 [2024-04-26 16:10:26.244546] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.569 [2024-04-26 16:10:26.244743] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.569 [2024-04-26 16:10:26.244754] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.569 [2024-04-26 16:10:26.244763] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.569 [2024-04-26 16:10:26.247932] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.830 [2024-04-26 16:10:26.256718] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.830 [2024-04-26 16:10:26.257345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.830 [2024-04-26 16:10:26.257771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.830 [2024-04-26 16:10:26.257815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.830 [2024-04-26 16:10:26.257846] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.830 [2024-04-26 16:10:26.258507] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.830 [2024-04-26 16:10:26.259016] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.830 [2024-04-26 16:10:26.259027] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.830 [2024-04-26 16:10:26.259035] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.830 [2024-04-26 16:10:26.262007] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.830 [2024-04-26 16:10:26.269901] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.830 [2024-04-26 16:10:26.270567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.830 [2024-04-26 16:10:26.271092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.830 [2024-04-26 16:10:26.271138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.830 [2024-04-26 16:10:26.271169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.830 [2024-04-26 16:10:26.271811] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.830 [2024-04-26 16:10:26.272252] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.830 [2024-04-26 16:10:26.272263] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.830 [2024-04-26 16:10:26.272272] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.830 [2024-04-26 16:10:26.275213] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.830 [2024-04-26 16:10:26.283086] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.830 [2024-04-26 16:10:26.283734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.830 [2024-04-26 16:10:26.284207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.830 [2024-04-26 16:10:26.284251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.830 [2024-04-26 16:10:26.284282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.830 [2024-04-26 16:10:26.284924] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.830 [2024-04-26 16:10:26.285158] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.830 [2024-04-26 16:10:26.285169] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.830 [2024-04-26 16:10:26.285177] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.830 [2024-04-26 16:10:26.288092] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.830 [2024-04-26 16:10:26.296207] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.830 [2024-04-26 16:10:26.296829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.830 [2024-04-26 16:10:26.297264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.830 [2024-04-26 16:10:26.297308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.830 [2024-04-26 16:10:26.297338] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.830 [2024-04-26 16:10:26.297980] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.830 [2024-04-26 16:10:26.298190] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.830 [2024-04-26 16:10:26.298202] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.830 [2024-04-26 16:10:26.298210] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.830 [2024-04-26 16:10:26.301132] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.830 [2024-04-26 16:10:26.309521] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.830 [2024-04-26 16:10:26.310174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.830 [2024-04-26 16:10:26.310674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.830 [2024-04-26 16:10:26.310714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.830 [2024-04-26 16:10:26.310744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.830 [2024-04-26 16:10:26.311077] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.830 [2024-04-26 16:10:26.311359] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.830 [2024-04-26 16:10:26.311374] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.830 [2024-04-26 16:10:26.311386] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.830 [2024-04-26 16:10:26.315811] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.830 [2024-04-26 16:10:26.323307] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.830 [2024-04-26 16:10:26.323961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.830 [2024-04-26 16:10:26.324346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.830 [2024-04-26 16:10:26.324389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.830 [2024-04-26 16:10:26.324418] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.830 [2024-04-26 16:10:26.325061] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.830 [2024-04-26 16:10:26.325315] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.830 [2024-04-26 16:10:26.325326] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.830 [2024-04-26 16:10:26.325334] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.830 [2024-04-26 16:10:26.328333] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.830 [2024-04-26 16:10:25.336530] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.830 [2024-04-26 16:10:26.337180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.830 [2024-04-26 16:10:26.337529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.830 [2024-04-26 16:10:26.337570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420
00:27:46.830 [2024-04-26 16:10:26.337600] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set
00:27:46.830 [2024-04-26 16:10:26.338224] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor
00:27:46.830 [2024-04-26 16:10:26.338415] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.830 [2024-04-26 16:10:26.338426] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.830 [2024-04-26 16:10:26.338434] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.830 [2024-04-26 16:10:26.341378] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.830 [2024-04-26 16:10:26.349748] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.830 [2024-04-26 16:10:26.350522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.831 [2024-04-26 16:10:26.351032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.831 [2024-04-26 16:10:26.351081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420
00:27:46.831 [2024-04-26 16:10:26.351113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set
00:27:46.831 [2024-04-26 16:10:26.351755] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor
00:27:46.831 [2024-04-26 16:10:26.352291] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.831 [2024-04-26 16:10:26.352307] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.831 [2024-04-26 16:10:26.352319] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2598255 Killed "${NVMF_APP[@]}" "$@"
00:27:46.831 16:10:26 -- host/bdevperf.sh@36 -- # tgt_init
00:27:46.831 [2024-04-26 16:10:26.356748] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.831 16:10:26 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:46.831 16:10:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:27:46.831 16:10:26 -- common/autotest_common.sh@710 -- # xtrace_disable
00:27:46.831 16:10:26 -- common/autotest_common.sh@10 -- # set +x
00:27:46.831 [2024-04-26 16:10:26.363603] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:46.831 [2024-04-26 16:10:26.364247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.831 16:10:26 -- nvmf/common.sh@470 -- # nvmfpid=2599893
00:27:46.831 [2024-04-26 16:10:26.364661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:46.831 [2024-04-26 16:10:26.364676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420
00:27:46.831 [2024-04-26 16:10:26.364686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set
00:27:46.831 16:10:26 -- nvmf/common.sh@471 -- # waitforlisten 2599893
00:27:46.831 16:10:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:46.831 [2024-04-26 16:10:26.364884] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor
00:27:46.831 [2024-04-26 16:10:26.365085] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:46.831 [2024-04-26 16:10:26.365097] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:46.831 [2024-04-26 16:10:26.365106] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:46.831 16:10:26 -- common/autotest_common.sh@817 -- # '[' -z 2599893 ']'
00:27:46.831 16:10:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:46.831 16:10:26 -- common/autotest_common.sh@822 -- # local max_retries=100
00:27:46.831 16:10:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:46.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:46.831 16:10:26 -- common/autotest_common.sh@826 -- # xtrace_disable
00:27:46.831 16:10:26 -- common/autotest_common.sh@10 -- # set +x
00:27:46.831 [2024-04-26 16:10:26.368195] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:46.831 [2024-04-26 16:10:26.376966] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.831 [2024-04-26 16:10:26.377628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.831 [2024-04-26 16:10:26.378056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.831 [2024-04-26 16:10:26.378074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.831 [2024-04-26 16:10:26.378088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.831 [2024-04-26 16:10:26.378286] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.831 [2024-04-26 16:10:26.378482] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.831 [2024-04-26 16:10:26.378493] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.831 [2024-04-26 16:10:26.378502] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.831 [2024-04-26 16:10:26.381596] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.831 [2024-04-26 16:10:26.390361] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.831 [2024-04-26 16:10:26.390961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.831 [2024-04-26 16:10:26.391374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.831 [2024-04-26 16:10:26.391389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.831 [2024-04-26 16:10:26.391399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.831 [2024-04-26 16:10:26.391598] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.831 [2024-04-26 16:10:26.391795] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.831 [2024-04-26 16:10:26.391805] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.831 [2024-04-26 16:10:26.391814] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.831 [2024-04-26 16:10:26.394905] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.831 [2024-04-26 16:10:26.403877] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.831 [2024-04-26 16:10:26.404566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.831 [2024-04-26 16:10:26.404993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.831 [2024-04-26 16:10:26.405006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.831 [2024-04-26 16:10:26.405016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.831 [2024-04-26 16:10:26.405224] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.831 [2024-04-26 16:10:26.405422] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.831 [2024-04-26 16:10:26.405433] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.831 [2024-04-26 16:10:26.405442] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.831 [2024-04-26 16:10:26.408517] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.831 [2024-04-26 16:10:26.417246] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.831 [2024-04-26 16:10:26.417966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.831 [2024-04-26 16:10:26.418426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.831 [2024-04-26 16:10:26.418442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.831 [2024-04-26 16:10:26.418456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.831 [2024-04-26 16:10:26.418660] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.831 [2024-04-26 16:10:26.418859] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.831 [2024-04-26 16:10:26.418871] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.831 [2024-04-26 16:10:26.418879] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.831 [2024-04-26 16:10:26.421992] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.831 [2024-04-26 16:10:26.430695] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.831 [2024-04-26 16:10:26.431374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.831 [2024-04-26 16:10:26.431722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.831 [2024-04-26 16:10:26.431736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.831 [2024-04-26 16:10:26.431747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.831 [2024-04-26 16:10:26.431950] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.831 [2024-04-26 16:10:26.432156] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.831 [2024-04-26 16:10:26.432168] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.831 [2024-04-26 16:10:26.432177] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.831 [2024-04-26 16:10:26.435292] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.831 [2024-04-26 16:10:26.441724] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:27:46.831 [2024-04-26 16:10:26.441810] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.831 [2024-04-26 16:10:26.444093] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.831 [2024-04-26 16:10:26.444723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.831 [2024-04-26 16:10:26.445127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.832 [2024-04-26 16:10:26.445143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.832 [2024-04-26 16:10:26.445154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.832 [2024-04-26 16:10:26.445365] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.832 [2024-04-26 16:10:26.445558] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.832 [2024-04-26 16:10:26.445569] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.832 [2024-04-26 16:10:26.445578] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.832 [2024-04-26 16:10:26.448697] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
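The EAL parameters in the initialization line above include the core mask "-c 0xE". 0xE is binary 1110, which selects cores 1, 2 and 3 and matches the "Total cores available: 3" notice and the three reactors started on cores 1-3 further down in this log. A small standalone decoder (plain C, not part of SPDK or DPDK) showing that mapping:

/* coremask.c - decode a DPDK-style hex core mask such as "-c 0xE" above. */
#include <stdio.h>

int main(void)
{
    unsigned long long coremask = 0xE;   /* from "-c 0xE" in the EAL parameters */

    printf("core mask 0x%llX selects cores:", coremask);
    for (int core = 0; core < 64; core++) {
        if (coremask & (1ULL << core))
            printf(" %d", core);
    }
    printf("\n");   /* prints: core mask 0xE selects cores: 1 2 3 */
    return 0;
}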
00:27:46.832 [2024-04-26 16:10:26.457470] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.832 [2024-04-26 16:10:26.458083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.832 [2024-04-26 16:10:26.458384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.832 [2024-04-26 16:10:26.458399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.832 [2024-04-26 16:10:26.458409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.832 [2024-04-26 16:10:26.458613] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.832 [2024-04-26 16:10:26.458813] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.832 [2024-04-26 16:10:26.458824] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.832 [2024-04-26 16:10:26.458833] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.832 [2024-04-26 16:10:26.461907] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.832 [2024-04-26 16:10:26.470936] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.832 [2024-04-26 16:10:26.471598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.832 [2024-04-26 16:10:26.471944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.832 [2024-04-26 16:10:26.471958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.832 [2024-04-26 16:10:26.471969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.832 [2024-04-26 16:10:26.472178] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.832 [2024-04-26 16:10:26.472381] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.832 [2024-04-26 16:10:26.472393] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.832 [2024-04-26 16:10:26.472402] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.832 [2024-04-26 16:10:26.475521] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:46.832 [2024-04-26 16:10:26.484365] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.832 [2024-04-26 16:10:26.485048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.832 [2024-04-26 16:10:26.485416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.832 [2024-04-26 16:10:26.485431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.832 [2024-04-26 16:10:26.485443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.832 [2024-04-26 16:10:26.485645] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.832 [2024-04-26 16:10:26.485844] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.832 [2024-04-26 16:10:26.485855] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.832 [2024-04-26 16:10:26.485864] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.832 [2024-04-26 16:10:26.488978] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:46.832 [2024-04-26 16:10:26.497845] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:46.832 [2024-04-26 16:10:26.498299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.832 [2024-04-26 16:10:26.498719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:46.832 [2024-04-26 16:10:26.498737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:46.832 [2024-04-26 16:10:26.498747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:46.832 [2024-04-26 16:10:26.498950] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:46.832 [2024-04-26 16:10:26.499156] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:46.832 [2024-04-26 16:10:26.499167] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:46.832 [2024-04-26 16:10:26.499176] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:46.832 [2024-04-26 16:10:26.502300] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
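Each block of entries above is one pass through the same cycle: disconnect, TCP connect refused, controller init error, reconnect poll failure, controller marked failed, "Resetting controller failed", then another attempt roughly 13 ms later. The sketch below only illustrates that retry cadence; it is not SPDK's bdev_nvme reset path, and try_reconnect() is a hypothetical stand-in for the connect/initialize sequence that keeps failing in the log.

/* retry_cadence.c - schematic of the reconnect loop visible in the log. */
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static bool try_reconnect(void)
{
    /* Stand-in for the TCP connect + controller init, which in the log
     * keeps failing with ECONNREFUSED while the target is unavailable. */
    return false;
}

int main(void)
{
    const int max_attempts = 5;              /* arbitrary bound for the sketch */
    const useconds_t retry_delay_us = 13000; /* ~13 ms, roughly the spacing above */

    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        if (try_reconnect()) {
            printf("attempt %d: controller reconnected\n", attempt);
            return 0;
        }
        printf("attempt %d: resetting controller failed, retrying\n", attempt);
        usleep(retry_delay_us);
    }
    printf("giving up after %d attempts\n", max_attempts);
    return 1;
}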
00:27:46.832 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.832 [2024-04-26 16:10:26.511395] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.092 [2024-04-26 16:10:26.512058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.092 [2024-04-26 16:10:26.512466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.092 [2024-04-26 16:10:26.512481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.092 [2024-04-26 16:10:26.512494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.092 [2024-04-26 16:10:26.512697] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.092 [2024-04-26 16:10:26.512898] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.092 [2024-04-26 16:10:26.512910] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.092 [2024-04-26 16:10:26.512919] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.092 [2024-04-26 16:10:26.516037] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.092 [2024-04-26 16:10:26.524913] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.092 [2024-04-26 16:10:26.525518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.092 [2024-04-26 16:10:26.525916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.092 [2024-04-26 16:10:26.525930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.092 [2024-04-26 16:10:26.525941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.092 [2024-04-26 16:10:26.526151] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.092 [2024-04-26 16:10:26.526352] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.092 [2024-04-26 16:10:26.526363] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.092 [2024-04-26 16:10:26.526373] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.092 [2024-04-26 16:10:26.529507] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
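The "EAL: No free 2048 kB hugepages reported on node 1" notice is informational here; the application keeps starting, so hugepages were presumably available on another node. The availability can be checked directly per NUMA node. The sketch below reads the usual Linux sysfs counters; the path layout is an assumption about this host, not something the log confirms.

/* hugepages.c - report free 2048 kB hugepages per NUMA node (nodes 0 and 1). */
#include <stdio.h>

int main(void)
{
    for (int node = 0; node < 2; node++) {
        char path[128];
        snprintf(path, sizeof(path),
                 "/sys/devices/system/node/node%d/hugepages/hugepages-2048kB/free_hugepages",
                 node);

        FILE *f = fopen(path, "r");
        if (!f) {
            printf("node %d: no 2048 kB hugepage info (%s missing)\n", node, path);
            continue;
        }
        long free_pages = -1;
        if (fscanf(f, "%ld", &free_pages) != 1)
            free_pages = -1;
        fclose(f);
        printf("node %d: %ld free 2048 kB hugepages\n", node, free_pages);
    }
    return 0;
}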
00:27:47.092 [2024-04-26 16:10:26.538292] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.092 [2024-04-26 16:10:26.538952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.092 [2024-04-26 16:10:26.539193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.092 [2024-04-26 16:10:26.539211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.092 [2024-04-26 16:10:26.539225] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.092 [2024-04-26 16:10:26.539431] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.092 [2024-04-26 16:10:26.539631] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.092 [2024-04-26 16:10:26.539643] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.092 [2024-04-26 16:10:26.539652] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.092 [2024-04-26 16:10:26.542778] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.092 [2024-04-26 16:10:26.551784] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.092 [2024-04-26 16:10:26.552437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.092 [2024-04-26 16:10:26.552832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.092 [2024-04-26 16:10:26.552846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.092 [2024-04-26 16:10:26.552856] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.092 [2024-04-26 16:10:26.553059] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.092 [2024-04-26 16:10:26.553261] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.092 [2024-04-26 16:10:26.553273] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.092 [2024-04-26 16:10:26.553281] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.092 [2024-04-26 16:10:26.556022] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:47.092 [2024-04-26 16:10:26.556437] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.092 [2024-04-26 16:10:26.565175] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.092 [2024-04-26 16:10:26.565845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.092 [2024-04-26 16:10:26.566243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.092 [2024-04-26 16:10:26.566258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.092 [2024-04-26 16:10:26.566269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.092 [2024-04-26 16:10:26.566472] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.092 [2024-04-26 16:10:26.566674] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.092 [2024-04-26 16:10:26.566685] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.092 [2024-04-26 16:10:26.566694] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.092 [2024-04-26 16:10:26.569833] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.092 [2024-04-26 16:10:26.578427] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.092 [2024-04-26 16:10:26.579054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.092 [2024-04-26 16:10:26.579417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.092 [2024-04-26 16:10:26.579432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.092 [2024-04-26 16:10:26.579447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.092 [2024-04-26 16:10:26.579643] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.092 [2024-04-26 16:10:26.579836] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.092 [2024-04-26 16:10:26.579846] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.092 [2024-04-26 16:10:26.579855] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.092 [2024-04-26 16:10:26.582934] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.092 [2024-04-26 16:10:26.591706] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.092 [2024-04-26 16:10:26.592361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.092 [2024-04-26 16:10:26.592780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.092 [2024-04-26 16:10:26.592794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.092 [2024-04-26 16:10:26.592804] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.092 [2024-04-26 16:10:26.593001] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.093 [2024-04-26 16:10:26.593221] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.093 [2024-04-26 16:10:26.593233] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.093 [2024-04-26 16:10:26.593242] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.093 [2024-04-26 16:10:26.596302] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.093 [2024-04-26 16:10:26.605083] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.093 [2024-04-26 16:10:26.605731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.093 [2024-04-26 16:10:26.606146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.093 [2024-04-26 16:10:26.606161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.093 [2024-04-26 16:10:26.606172] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.093 [2024-04-26 16:10:26.606375] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.093 [2024-04-26 16:10:26.606574] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.093 [2024-04-26 16:10:26.606586] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.093 [2024-04-26 16:10:26.606594] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.093 [2024-04-26 16:10:26.609641] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.093 [2024-04-26 16:10:26.618435] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.093 [2024-04-26 16:10:26.619138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.093 [2024-04-26 16:10:26.619555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.093 [2024-04-26 16:10:26.619569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.093 [2024-04-26 16:10:26.619582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.093 [2024-04-26 16:10:26.619781] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.093 [2024-04-26 16:10:26.619974] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.093 [2024-04-26 16:10:26.620012] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.093 [2024-04-26 16:10:26.620020] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.093 [2024-04-26 16:10:26.623106] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.093 [2024-04-26 16:10:26.631702] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.093 [2024-04-26 16:10:26.632352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.093 [2024-04-26 16:10:26.632572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.093 [2024-04-26 16:10:26.632586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.093 [2024-04-26 16:10:26.632597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.093 [2024-04-26 16:10:26.632797] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.093 [2024-04-26 16:10:26.632998] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.093 [2024-04-26 16:10:26.633009] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.093 [2024-04-26 16:10:26.633018] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.093 [2024-04-26 16:10:26.636053] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.093 [2024-04-26 16:10:26.645040] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.093 [2024-04-26 16:10:26.645640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.093 [2024-04-26 16:10:26.645937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.093 [2024-04-26 16:10:26.645951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.093 [2024-04-26 16:10:26.645961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.093 [2024-04-26 16:10:26.646162] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.093 [2024-04-26 16:10:26.646355] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.093 [2024-04-26 16:10:26.646366] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.093 [2024-04-26 16:10:26.646375] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.093 [2024-04-26 16:10:26.649383] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.093 [2024-04-26 16:10:26.658317] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.093 [2024-04-26 16:10:26.658958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.093 [2024-04-26 16:10:26.659372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.093 [2024-04-26 16:10:26.659389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.093 [2024-04-26 16:10:26.659399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.093 [2024-04-26 16:10:26.659599] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.093 [2024-04-26 16:10:26.659792] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.093 [2024-04-26 16:10:26.659802] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.093 [2024-04-26 16:10:26.659811] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.093 [2024-04-26 16:10:26.662819] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.093 [2024-04-26 16:10:26.671594] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.093 [2024-04-26 16:10:26.672209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.093 [2024-04-26 16:10:26.672558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.093 [2024-04-26 16:10:26.672572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.093 [2024-04-26 16:10:26.672582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.093 [2024-04-26 16:10:26.672784] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.093 [2024-04-26 16:10:26.672982] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.093 [2024-04-26 16:10:26.672993] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.093 [2024-04-26 16:10:26.673002] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.093 [2024-04-26 16:10:26.676107] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.093 [2024-04-26 16:10:26.685123] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.093 [2024-04-26 16:10:26.685773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.093 [2024-04-26 16:10:26.686133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.093 [2024-04-26 16:10:26.686150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.093 [2024-04-26 16:10:26.686161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.093 [2024-04-26 16:10:26.686369] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.093 [2024-04-26 16:10:26.686564] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.093 [2024-04-26 16:10:26.686575] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.093 [2024-04-26 16:10:26.686583] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.093 [2024-04-26 16:10:26.689722] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.093 [2024-04-26 16:10:26.698518] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.093 [2024-04-26 16:10:26.699147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.093 [2024-04-26 16:10:26.699566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.093 [2024-04-26 16:10:26.699580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.093 [2024-04-26 16:10:26.699590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.093 [2024-04-26 16:10:26.699786] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.093 [2024-04-26 16:10:26.699984] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.093 [2024-04-26 16:10:26.699995] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.093 [2024-04-26 16:10:26.700004] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.093 [2024-04-26 16:10:26.703077] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.093 [2024-04-26 16:10:26.711881] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.093 [2024-04-26 16:10:26.712536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.093 [2024-04-26 16:10:26.712928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.093 [2024-04-26 16:10:26.712941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.093 [2024-04-26 16:10:26.712951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.093 [2024-04-26 16:10:26.713172] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.093 [2024-04-26 16:10:26.713373] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.093 [2024-04-26 16:10:26.713384] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.093 [2024-04-26 16:10:26.713392] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.093 [2024-04-26 16:10:26.716442] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.093 [2024-04-26 16:10:26.725218] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.093 [2024-04-26 16:10:26.725677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.093 [2024-04-26 16:10:26.726098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.093 [2024-04-26 16:10:26.726129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.093 [2024-04-26 16:10:26.726140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.093 [2024-04-26 16:10:26.726348] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.093 [2024-04-26 16:10:26.726542] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.093 [2024-04-26 16:10:26.726553] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.093 [2024-04-26 16:10:26.726561] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.093 [2024-04-26 16:10:26.729607] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.094 [2024-04-26 16:10:26.738583] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.094 [2024-04-26 16:10:26.739271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.094 [2024-04-26 16:10:26.739629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.094 [2024-04-26 16:10:26.739643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.094 [2024-04-26 16:10:26.739652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.094 [2024-04-26 16:10:26.739847] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.094 [2024-04-26 16:10:26.740044] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.094 [2024-04-26 16:10:26.740055] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.094 [2024-04-26 16:10:26.740063] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.094 [2024-04-26 16:10:26.743161] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.094 [2024-04-26 16:10:26.751891] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.094 [2024-04-26 16:10:26.752539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.094 [2024-04-26 16:10:26.752970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.094 [2024-04-26 16:10:26.752984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.094 [2024-04-26 16:10:26.752994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.094 [2024-04-26 16:10:26.753200] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.094 [2024-04-26 16:10:26.753399] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.094 [2024-04-26 16:10:26.753410] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.094 [2024-04-26 16:10:26.753419] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.094 [2024-04-26 16:10:26.756464] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.094 [2024-04-26 16:10:26.765216] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.094 [2024-04-26 16:10:26.765836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.094 [2024-04-26 16:10:26.766231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.094 [2024-04-26 16:10:26.766247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.094 [2024-04-26 16:10:26.766256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.094 [2024-04-26 16:10:26.766456] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.094 [2024-04-26 16:10:26.766653] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.094 [2024-04-26 16:10:26.766664] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.094 [2024-04-26 16:10:26.766673] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.094 [2024-04-26 16:10:26.769783] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.354 [2024-04-26 16:10:26.778632] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.354 [2024-04-26 16:10:26.779334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.354 [2024-04-26 16:10:26.779683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.354 [2024-04-26 16:10:26.779698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.354 [2024-04-26 16:10:26.779709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.354 [2024-04-26 16:10:26.779912] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.354 [2024-04-26 16:10:26.780117] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.354 [2024-04-26 16:10:26.780133] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.354 [2024-04-26 16:10:26.780142] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.354 [2024-04-26 16:10:26.783202] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.354 [2024-04-26 16:10:26.785169] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.354 [2024-04-26 16:10:26.785200] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.354 [2024-04-26 16:10:26.785209] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.354 [2024-04-26 16:10:26.785219] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.354 [2024-04-26 16:10:26.785247] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:47.354 [2024-04-26 16:10:26.785311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:47.354 [2024-04-26 16:10:26.785411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.354 [2024-04-26 16:10:26.785420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:47.354 [2024-04-26 16:10:26.792046] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.354 [2024-04-26 16:10:26.792673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.354 [2024-04-26 16:10:26.793000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.354 [2024-04-26 16:10:26.793014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.354 [2024-04-26 16:10:26.793026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.354 [2024-04-26 16:10:26.793248] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.354 [2024-04-26 16:10:26.793449] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.354 [2024-04-26 16:10:26.793461] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.354 [2024-04-26 16:10:26.793470] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.354 [2024-04-26 16:10:26.796601] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.354 [2024-04-26 16:10:26.805451] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.354 [2024-04-26 16:10:26.806117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.354 [2024-04-26 16:10:26.806540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.354 [2024-04-26 16:10:26.806554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.354 [2024-04-26 16:10:26.806565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.354 [2024-04-26 16:10:26.806770] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.354 [2024-04-26 16:10:26.806971] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.354 [2024-04-26 16:10:26.806983] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.354 [2024-04-26 16:10:26.806993] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.354 [2024-04-26 16:10:26.810122] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.354 [2024-04-26 16:10:26.818940] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.354 [2024-04-26 16:10:26.819602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.355 [2024-04-26 16:10:26.820022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.355 [2024-04-26 16:10:26.820037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.355 [2024-04-26 16:10:26.820047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.355 [2024-04-26 16:10:26.820260] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.355 [2024-04-26 16:10:26.820460] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.355 [2024-04-26 16:10:26.820472] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.355 [2024-04-26 16:10:26.820481] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.355 [2024-04-26 16:10:26.823590] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.355 [2024-04-26 16:10:26.832410] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.355 [2024-04-26 16:10:26.833045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.355 [2024-04-26 16:10:26.833467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.355 [2024-04-26 16:10:26.833482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.355 [2024-04-26 16:10:26.833493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.355 [2024-04-26 16:10:26.833696] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.355 [2024-04-26 16:10:26.833895] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.355 [2024-04-26 16:10:26.833907] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.355 [2024-04-26 16:10:26.833916] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.355 [2024-04-26 16:10:26.837027] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.355 [2024-04-26 16:10:26.845856] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.355 [2024-04-26 16:10:26.846426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.355 [2024-04-26 16:10:26.846842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.355 [2024-04-26 16:10:26.846856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.355 [2024-04-26 16:10:26.846867] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.355 [2024-04-26 16:10:26.847068] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.355 [2024-04-26 16:10:26.847275] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.355 [2024-04-26 16:10:26.847286] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.355 [2024-04-26 16:10:26.847295] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.355 [2024-04-26 16:10:26.850400] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.355 [2024-04-26 16:10:26.859379] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.355 [2024-04-26 16:10:26.860003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.355 [2024-04-26 16:10:26.860255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.355 [2024-04-26 16:10:26.860270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.355 [2024-04-26 16:10:26.860280] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.355 [2024-04-26 16:10:26.860482] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.355 [2024-04-26 16:10:26.860680] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.355 [2024-04-26 16:10:26.860691] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.355 [2024-04-26 16:10:26.860699] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.355 [2024-04-26 16:10:26.863807] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.355 [2024-04-26 16:10:26.872788] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.355 [2024-04-26 16:10:26.873370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.355 [2024-04-26 16:10:26.873769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.355 [2024-04-26 16:10:26.873783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.355 [2024-04-26 16:10:26.873794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.355 [2024-04-26 16:10:26.873997] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.355 [2024-04-26 16:10:26.874199] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.355 [2024-04-26 16:10:26.874211] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.355 [2024-04-26 16:10:26.874219] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.355 [2024-04-26 16:10:26.877372] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.355 [2024-04-26 16:10:26.886219] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.355 [2024-04-26 16:10:26.886830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.355 [2024-04-26 16:10:26.887129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.355 [2024-04-26 16:10:26.887145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.355 [2024-04-26 16:10:26.887156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.355 [2024-04-26 16:10:26.887360] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.355 [2024-04-26 16:10:26.887562] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.355 [2024-04-26 16:10:26.887574] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.355 [2024-04-26 16:10:26.887584] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.355 [2024-04-26 16:10:26.890708] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.355 [2024-04-26 16:10:26.899763] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.355 [2024-04-26 16:10:26.900288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.355 [2024-04-26 16:10:26.900661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.355 [2024-04-26 16:10:26.900680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.355 [2024-04-26 16:10:26.900691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.355 [2024-04-26 16:10:26.900894] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.355 [2024-04-26 16:10:26.901098] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.355 [2024-04-26 16:10:26.901110] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.355 [2024-04-26 16:10:26.901119] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.355 [2024-04-26 16:10:26.904256] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.355 [2024-04-26 16:10:26.913275] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.355 [2024-04-26 16:10:26.913861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.355 [2024-04-26 16:10:26.914217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.355 [2024-04-26 16:10:26.914233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.355 [2024-04-26 16:10:26.914243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.355 [2024-04-26 16:10:26.914447] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.355 [2024-04-26 16:10:26.914647] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.355 [2024-04-26 16:10:26.914658] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.355 [2024-04-26 16:10:26.914666] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.355 [2024-04-26 16:10:26.917788] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.355 [2024-04-26 16:10:26.926616] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.355 [2024-04-26 16:10:26.927166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.355 [2024-04-26 16:10:26.927610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.355 [2024-04-26 16:10:26.927624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.355 [2024-04-26 16:10:26.927634] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.355 [2024-04-26 16:10:26.927835] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.355 [2024-04-26 16:10:26.928034] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.356 [2024-04-26 16:10:26.928045] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.356 [2024-04-26 16:10:26.928054] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.356 [2024-04-26 16:10:26.931178] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.356 [2024-04-26 16:10:26.940005] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.356 [2024-04-26 16:10:26.940612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.356 [2024-04-26 16:10:26.940907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.356 [2024-04-26 16:10:26.940921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.356 [2024-04-26 16:10:26.940933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.356 [2024-04-26 16:10:26.941149] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.356 [2024-04-26 16:10:26.941348] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.356 [2024-04-26 16:10:26.941360] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.356 [2024-04-26 16:10:26.941369] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.356 [2024-04-26 16:10:26.944475] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.356 [2024-04-26 16:10:26.953466] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.356 [2024-04-26 16:10:26.954061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.356 [2024-04-26 16:10:26.954352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.356 [2024-04-26 16:10:26.954366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.356 [2024-04-26 16:10:26.954376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.356 [2024-04-26 16:10:26.954576] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.356 [2024-04-26 16:10:26.954774] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.356 [2024-04-26 16:10:26.954785] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.356 [2024-04-26 16:10:26.954793] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.356 [2024-04-26 16:10:26.957902] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.356 [2024-04-26 16:10:26.966878] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.356 [2024-04-26 16:10:26.967441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.356 [2024-04-26 16:10:26.967721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.356 [2024-04-26 16:10:26.967743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.356 [2024-04-26 16:10:26.967753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.356 [2024-04-26 16:10:26.967952] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.356 [2024-04-26 16:10:26.968156] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.356 [2024-04-26 16:10:26.968167] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.356 [2024-04-26 16:10:26.968176] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.356 [2024-04-26 16:10:26.971277] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.356 [2024-04-26 16:10:26.980251] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.356 [2024-04-26 16:10:26.980877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.356 [2024-04-26 16:10:26.981182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.356 [2024-04-26 16:10:26.981197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.356 [2024-04-26 16:10:26.981210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.356 [2024-04-26 16:10:26.981408] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.356 [2024-04-26 16:10:26.981605] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.356 [2024-04-26 16:10:26.981617] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.356 [2024-04-26 16:10:26.981625] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.356 [2024-04-26 16:10:26.984723] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.356 [2024-04-26 16:10:26.993702] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.356 [2024-04-26 16:10:26.994218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.356 [2024-04-26 16:10:26.994562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.356 [2024-04-26 16:10:26.994577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.356 [2024-04-26 16:10:26.994587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.356 [2024-04-26 16:10:26.994786] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.356 [2024-04-26 16:10:26.994983] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.356 [2024-04-26 16:10:26.994995] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.356 [2024-04-26 16:10:26.995003] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.356 [2024-04-26 16:10:26.998110] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.356 [2024-04-26 16:10:27.007126] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.356 [2024-04-26 16:10:27.007773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.356 [2024-04-26 16:10:27.008155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.356 [2024-04-26 16:10:27.008172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.356 [2024-04-26 16:10:27.008182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.356 [2024-04-26 16:10:27.008384] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.356 [2024-04-26 16:10:27.008583] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.356 [2024-04-26 16:10:27.008595] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.356 [2024-04-26 16:10:27.008603] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.356 [2024-04-26 16:10:27.011705] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.356 [2024-04-26 16:10:27.020502] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.356 [2024-04-26 16:10:27.021077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.356 [2024-04-26 16:10:27.021422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.356 [2024-04-26 16:10:27.021437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.356 [2024-04-26 16:10:27.021447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.356 [2024-04-26 16:10:27.021649] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.356 [2024-04-26 16:10:27.021848] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.356 [2024-04-26 16:10:27.021859] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.356 [2024-04-26 16:10:27.021868] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.356 [2024-04-26 16:10:27.024971] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.356 [2024-04-26 16:10:27.034002] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.356 [2024-04-26 16:10:27.034576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.356 [2024-04-26 16:10:27.035003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.356 [2024-04-26 16:10:27.035020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.356 [2024-04-26 16:10:27.035031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.356 [2024-04-26 16:10:27.035237] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.356 [2024-04-26 16:10:27.035450] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.356 [2024-04-26 16:10:27.035480] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.356 [2024-04-26 16:10:27.035494] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.617 [2024-04-26 16:10:27.038633] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.617 [2024-04-26 16:10:27.047478] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.617 [2024-04-26 16:10:27.048045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.048394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.048411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.617 [2024-04-26 16:10:27.048422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.617 [2024-04-26 16:10:27.048624] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.617 [2024-04-26 16:10:27.048822] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.617 [2024-04-26 16:10:27.048834] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.617 [2024-04-26 16:10:27.048843] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.617 [2024-04-26 16:10:27.051947] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.617 [2024-04-26 16:10:27.060965] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.617 [2024-04-26 16:10:27.061528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.061934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.061949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.617 [2024-04-26 16:10:27.061961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.617 [2024-04-26 16:10:27.062174] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.617 [2024-04-26 16:10:27.062381] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.617 [2024-04-26 16:10:27.062393] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.617 [2024-04-26 16:10:27.062402] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.617 [2024-04-26 16:10:27.065526] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.617 [2024-04-26 16:10:27.074375] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.617 [2024-04-26 16:10:27.074953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.075258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.075276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.617 [2024-04-26 16:10:27.075287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.617 [2024-04-26 16:10:27.075492] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.617 [2024-04-26 16:10:27.075693] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.617 [2024-04-26 16:10:27.075705] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.617 [2024-04-26 16:10:27.075714] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.617 [2024-04-26 16:10:27.078832] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.617 [2024-04-26 16:10:27.087822] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.617 [2024-04-26 16:10:27.088411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.088714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.088729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.617 [2024-04-26 16:10:27.088739] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.617 [2024-04-26 16:10:27.088940] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.617 [2024-04-26 16:10:27.089145] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.617 [2024-04-26 16:10:27.089158] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.617 [2024-04-26 16:10:27.089167] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.617 [2024-04-26 16:10:27.092275] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.617 [2024-04-26 16:10:27.101277] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.617 [2024-04-26 16:10:27.101889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.102244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.102259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.617 [2024-04-26 16:10:27.102270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.617 [2024-04-26 16:10:27.102469] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.617 [2024-04-26 16:10:27.102672] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.617 [2024-04-26 16:10:27.102683] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.617 [2024-04-26 16:10:27.102692] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.617 [2024-04-26 16:10:27.105799] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.617 [2024-04-26 16:10:27.114796] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.617 [2024-04-26 16:10:27.115423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.115772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.115786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.617 [2024-04-26 16:10:27.115796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.617 [2024-04-26 16:10:27.115996] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.617 [2024-04-26 16:10:27.116199] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.617 [2024-04-26 16:10:27.116211] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.617 [2024-04-26 16:10:27.116220] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.617 [2024-04-26 16:10:27.119320] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.617 [2024-04-26 16:10:27.128302] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.617 [2024-04-26 16:10:27.128811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.129193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.129211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.617 [2024-04-26 16:10:27.129221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.617 [2024-04-26 16:10:27.129424] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.617 [2024-04-26 16:10:27.129623] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.617 [2024-04-26 16:10:27.129634] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.617 [2024-04-26 16:10:27.129643] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.617 [2024-04-26 16:10:27.132741] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.617 [2024-04-26 16:10:27.141716] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.617 [2024-04-26 16:10:27.142286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.142682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.142696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.617 [2024-04-26 16:10:27.142706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.617 [2024-04-26 16:10:27.142905] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.617 [2024-04-26 16:10:27.143108] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.617 [2024-04-26 16:10:27.143123] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.617 [2024-04-26 16:10:27.143132] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.617 [2024-04-26 16:10:27.146230] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.617 [2024-04-26 16:10:27.155201] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.617 [2024-04-26 16:10:27.155809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.156105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.156120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.617 [2024-04-26 16:10:27.156131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.617 [2024-04-26 16:10:27.156330] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.617 [2024-04-26 16:10:27.156528] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.617 [2024-04-26 16:10:27.156540] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.617 [2024-04-26 16:10:27.156549] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.617 [2024-04-26 16:10:27.159653] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.617 [2024-04-26 16:10:27.168626] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.617 [2024-04-26 16:10:27.169214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.169564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.169579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.617 [2024-04-26 16:10:27.169589] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.617 [2024-04-26 16:10:27.169788] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.617 [2024-04-26 16:10:27.169987] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.617 [2024-04-26 16:10:27.169999] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.617 [2024-04-26 16:10:27.170008] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.617 [2024-04-26 16:10:27.173114] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.617 [2024-04-26 16:10:27.182102] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.617 [2024-04-26 16:10:27.182717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.183018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.183032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.617 [2024-04-26 16:10:27.183043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.617 [2024-04-26 16:10:27.183250] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.617 [2024-04-26 16:10:27.183450] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.617 [2024-04-26 16:10:27.183462] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.617 [2024-04-26 16:10:27.183474] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.617 [2024-04-26 16:10:27.186582] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.617 [2024-04-26 16:10:27.195572] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.617 [2024-04-26 16:10:27.196144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.196445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.196459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.617 [2024-04-26 16:10:27.196470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.617 [2024-04-26 16:10:27.196670] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.617 [2024-04-26 16:10:27.196868] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.617 [2024-04-26 16:10:27.196880] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.617 [2024-04-26 16:10:27.196890] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.617 [2024-04-26 16:10:27.199994] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.617 [2024-04-26 16:10:27.208985] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.617 [2024-04-26 16:10:27.209577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.209927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.209941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.617 [2024-04-26 16:10:27.209951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.617 [2024-04-26 16:10:27.210156] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.617 [2024-04-26 16:10:27.210355] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.617 [2024-04-26 16:10:27.210367] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.617 [2024-04-26 16:10:27.210375] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.617 [2024-04-26 16:10:27.213484] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.617 [2024-04-26 16:10:27.222476] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.617 [2024-04-26 16:10:27.222978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.223274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.223290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.617 [2024-04-26 16:10:27.223301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.617 [2024-04-26 16:10:27.223504] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.617 [2024-04-26 16:10:27.223704] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.617 [2024-04-26 16:10:27.223715] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.617 [2024-04-26 16:10:27.223727] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.617 [2024-04-26 16:10:27.226828] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.617 16:10:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:47.617 16:10:27 -- common/autotest_common.sh@850 -- # return 0 00:27:47.617 16:10:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:47.617 16:10:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:47.617 16:10:27 -- common/autotest_common.sh@10 -- # set +x 00:27:47.617 [2024-04-26 16:10:27.235983] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.617 [2024-04-26 16:10:27.236560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.236851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.236865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.617 [2024-04-26 16:10:27.236875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.617 [2024-04-26 16:10:27.237082] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.617 [2024-04-26 16:10:27.237280] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.617 [2024-04-26 16:10:27.237291] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.617 [2024-04-26 16:10:27.237300] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.617 [2024-04-26 16:10:27.240400] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
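From here the reconnect errors are interleaved with xtrace lines from host/bdevperf.sh, common/autotest_common.sh and nvmf/common.sh: the same console carries both the bdevperf application's log and the shell trace of the test script, which has just finished start_nvmf_tgt and is about to configure the target over RPC. When working from a saved copy, the two streams can be split on the ' -- # ' marker that the xtrace prefix carries (file names below are only examples):

  # shell-trace lines carry a ' -- # ' marker; application/kernel lines do not
  grep -F ' -- # ' run.log  > script-trace.log
  grep -vF ' -- # ' run.log > app-output.log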
00:27:47.617 [2024-04-26 16:10:27.249389] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.617 [2024-04-26 16:10:27.249895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.617 [2024-04-26 16:10:27.250208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.618 [2024-04-26 16:10:27.250224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.618 [2024-04-26 16:10:27.250234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.618 [2024-04-26 16:10:27.250434] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.618 [2024-04-26 16:10:27.250631] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.618 [2024-04-26 16:10:27.250643] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.618 [2024-04-26 16:10:27.250651] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.618 [2024-04-26 16:10:27.253748] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.618 [2024-04-26 16:10:27.262724] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.618 [2024-04-26 16:10:27.263289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.618 [2024-04-26 16:10:27.263637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.618 [2024-04-26 16:10:27.263651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.618 [2024-04-26 16:10:27.263661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.618 [2024-04-26 16:10:27.263861] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.618 [2024-04-26 16:10:27.264059] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.618 [2024-04-26 16:10:27.264080] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.618 [2024-04-26 16:10:27.264089] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.618 16:10:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.618 16:10:27 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:47.618 16:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.618 16:10:27 -- common/autotest_common.sh@10 -- # set +x 00:27:47.618 [2024-04-26 16:10:27.267184] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.618 [2024-04-26 16:10:27.271636] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:47.618 [2024-04-26 16:10:27.276161] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.618 [2024-04-26 16:10:27.276727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.618 [2024-04-26 16:10:27.277077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.618 [2024-04-26 16:10:27.277091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.618 [2024-04-26 16:10:27.277101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.618 [2024-04-26 16:10:27.277300] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.618 [2024-04-26 16:10:27.277498] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.618 [2024-04-26 16:10:27.277509] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.618 [2024-04-26 16:10:27.277517] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.618 [2024-04-26 16:10:27.280618] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.618 [2024-04-26 16:10:27.289590] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.618 [2024-04-26 16:10:27.290141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.618 [2024-04-26 16:10:27.290557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.618 [2024-04-26 16:10:27.290571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.618 [2024-04-26 16:10:27.290581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.618 [2024-04-26 16:10:27.290779] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.618 [2024-04-26 16:10:27.290976] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.618 [2024-04-26 16:10:27.290987] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.618 [2024-04-26 16:10:27.290996] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.618 16:10:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.618 16:10:27 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:47.618 16:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.618 [2024-04-26 16:10:27.294134] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.618 16:10:27 -- common/autotest_common.sh@10 -- # set +x 00:27:47.877 [2024-04-26 16:10:27.303018] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.878 [2024-04-26 16:10:27.303693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.878 [2024-04-26 16:10:27.304101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.878 [2024-04-26 16:10:27.304121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.878 [2024-04-26 16:10:27.304133] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.878 [2024-04-26 16:10:27.304336] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.878 [2024-04-26 16:10:27.304535] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.878 [2024-04-26 16:10:27.304546] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.878 [2024-04-26 16:10:27.304555] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.878 [2024-04-26 16:10:27.307678] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.878 [2024-04-26 16:10:27.316528] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.878 [2024-04-26 16:10:27.317191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.878 [2024-04-26 16:10:27.317613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.878 [2024-04-26 16:10:27.317628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.878 [2024-04-26 16:10:27.317639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.878 [2024-04-26 16:10:27.317848] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.878 [2024-04-26 16:10:27.318049] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.878 [2024-04-26 16:10:27.318060] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.878 [2024-04-26 16:10:27.318074] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.878 [2024-04-26 16:10:27.321196] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.878 [2024-04-26 16:10:27.330035] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.878 [2024-04-26 16:10:27.330707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.878 [2024-04-26 16:10:27.331045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.878 [2024-04-26 16:10:27.331059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.878 [2024-04-26 16:10:27.331075] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.878 [2024-04-26 16:10:27.331277] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.878 [2024-04-26 16:10:27.331476] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.878 [2024-04-26 16:10:27.331487] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.878 [2024-04-26 16:10:27.331496] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.878 [2024-04-26 16:10:27.334604] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.878 [2024-04-26 16:10:27.343430] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.878 [2024-04-26 16:10:27.343923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.878 [2024-04-26 16:10:27.344340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.878 [2024-04-26 16:10:27.344355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.878 [2024-04-26 16:10:27.344369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.878 [2024-04-26 16:10:27.344570] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.878 [2024-04-26 16:10:27.344769] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.878 [2024-04-26 16:10:27.344780] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.878 [2024-04-26 16:10:27.344789] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.878 [2024-04-26 16:10:27.347897] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.878 [2024-04-26 16:10:27.356896] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.878 [2024-04-26 16:10:27.357556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.878 [2024-04-26 16:10:27.357839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.878 [2024-04-26 16:10:27.357853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.878 [2024-04-26 16:10:27.357863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.878 [2024-04-26 16:10:27.358064] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.878 [2024-04-26 16:10:27.358269] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.878 [2024-04-26 16:10:27.358280] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.878 [2024-04-26 16:10:27.358289] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.878 [2024-04-26 16:10:27.361398] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.878 [2024-04-26 16:10:27.370370] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.878 [2024-04-26 16:10:27.371030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.878 [2024-04-26 16:10:27.371395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.878 [2024-04-26 16:10:27.371410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.878 [2024-04-26 16:10:27.371420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.878 [2024-04-26 16:10:27.371621] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.878 [2024-04-26 16:10:27.371819] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.878 [2024-04-26 16:10:27.371830] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.878 [2024-04-26 16:10:27.371839] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.878 [2024-04-26 16:10:27.374911] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:47.878 [2024-04-26 16:10:27.383713] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.878 [2024-04-26 16:10:27.384370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.878 [2024-04-26 16:10:27.384790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.878 [2024-04-26 16:10:27.384804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.878 [2024-04-26 16:10:27.384813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.878 [2024-04-26 16:10:27.385015] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.878 [2024-04-26 16:10:27.385220] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.878 [2024-04-26 16:10:27.385232] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.878 [2024-04-26 16:10:27.385240] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.878 Malloc0 00:27:47.878 16:10:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.878 16:10:27 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:47.878 [2024-04-26 16:10:27.388347] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.878 16:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.878 16:10:27 -- common/autotest_common.sh@10 -- # set +x 00:27:47.878 [2024-04-26 16:10:27.397133] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.878 [2024-04-26 16:10:27.397797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.878 [2024-04-26 16:10:27.398219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.878 [2024-04-26 16:10:27.398235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005a40 with addr=10.0.0.2, port=4420 00:27:47.878 [2024-04-26 16:10:27.398245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:27:47.878 [2024-04-26 16:10:27.398447] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.878 [2024-04-26 16:10:27.398646] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.878 [2024-04-26 16:10:27.398657] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.878 [2024-04-26 16:10:27.398666] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:47.878 16:10:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.878 16:10:27 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:47.878 16:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.878 16:10:27 -- common/autotest_common.sh@10 -- # set +x 00:27:47.879 [2024-04-26 16:10:27.401764] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.879 16:10:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.879 16:10:27 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:47.879 16:10:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.879 16:10:27 -- common/autotest_common.sh@10 -- # set +x 00:27:47.879 [2024-04-26 16:10:27.410552] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.879 [2024-04-26 16:10:27.411043] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.879 [2024-04-26 16:10:27.411201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:47.879 16:10:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.879 16:10:27 -- host/bdevperf.sh@38 -- # wait 2598967 00:27:47.879 [2024-04-26 16:10:27.416532] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:27:47.879 [2024-04-26 16:10:27.416584] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (107): Transport endpoint is not connected 00:27:47.879 [2024-04-26 16:10:27.416892] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:27:47.879 [2024-04-26 16:10:27.417100] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:47.879 [2024-04-26 16:10:27.417113] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:47.879 [2024-04-26 16:10:27.417125] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:47.879 [2024-04-26 16:10:27.420232] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:47.879 [2024-04-26 16:10:27.423902] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:47.879 [2024-04-26 16:10:27.453351] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
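Interleaved with the reconnect loop, the trace above configures the freshly started target: nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener. Once the listener on 10.0.0.2 port 4420 is up, the pending reset finally completes ("Resetting controller successful"). rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, so issued by hand against a running nvmf_tgt the same sequence would look roughly like this (a sketch only; arguments copied from the trace above):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB I/O unit size
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420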
00:27:57.855 00:27:57.855 Latency(us) 00:27:57.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:57.855 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:57.855 Verification LBA range: start 0x0 length 0x4000 00:27:57.855 Nvme1n1 : 15.01 7027.80 27.45 10840.63 0.00 7140.91 1246.61 35788.35 00:27:57.855 =================================================================================================================== 00:27:57.855 Total : 7027.80 27.45 10840.63 0.00 7140.91 1246.61 35788.35 00:27:57.855 16:10:37 -- host/bdevperf.sh@39 -- # sync 00:27:57.855 16:10:37 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:57.855 16:10:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.855 16:10:37 -- common/autotest_common.sh@10 -- # set +x 00:27:57.855 16:10:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.855 16:10:37 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:57.855 16:10:37 -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:57.855 16:10:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:57.855 16:10:37 -- nvmf/common.sh@117 -- # sync 00:27:57.855 16:10:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:57.855 16:10:37 -- nvmf/common.sh@120 -- # set +e 00:27:57.855 16:10:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:57.855 16:10:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:57.855 rmmod nvme_tcp 00:27:57.855 rmmod nvme_fabrics 00:27:57.855 rmmod nvme_keyring 00:27:57.855 16:10:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:57.855 16:10:37 -- nvmf/common.sh@124 -- # set -e 00:27:57.855 16:10:37 -- nvmf/common.sh@125 -- # return 0 00:27:57.855 16:10:37 -- nvmf/common.sh@478 -- # '[' -n 2599893 ']' 00:27:57.855 16:10:37 -- nvmf/common.sh@479 -- # killprocess 2599893 00:27:57.855 16:10:37 -- common/autotest_common.sh@936 -- # '[' -z 2599893 ']' 00:27:57.855 16:10:37 -- common/autotest_common.sh@940 -- # kill -0 2599893 00:27:57.855 16:10:37 -- common/autotest_common.sh@941 -- # uname 00:27:57.855 16:10:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:57.855 16:10:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2599893 00:27:57.855 16:10:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:57.855 16:10:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:57.855 16:10:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2599893' 00:27:57.855 killing process with pid 2599893 00:27:57.855 16:10:37 -- common/autotest_common.sh@955 -- # kill 2599893 00:27:57.855 16:10:37 -- common/autotest_common.sh@960 -- # wait 2599893 00:27:59.757 16:10:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:59.757 16:10:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:59.757 16:10:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:59.757 16:10:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:59.757 16:10:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:59.757 16:10:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.757 16:10:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:59.757 16:10:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.663 16:10:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:01.663 00:28:01.663 real 0m30.433s 00:28:01.663 user 1m15.753s 00:28:01.663 sys 0m6.763s 00:28:01.663 16:10:41 -- 
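The summary above is bdevperf's end-of-run table, flattened into the console stream: one Nvme1n1 job ran the verify workload at queue depth 128 with 4 KiB I/Os for about 15 s, completing 7027.80 IOPS at an average latency of roughly 7.1 ms (min 1246.61 us, max 35788.35 us). The large Fail/s figure is expected here, since I/O is failed back each time the controller is disconnected and reset. The MiB/s column follows directly from IOPS and the 4 KiB I/O size, which makes a quick sanity check possible:

  # 7027.80 IOPS x 4096 bytes per I/O, expressed in MiB/s -> matches the 27.45 in the table
  echo 'scale=2; 7027.80 * 4096 / 1048576' | bc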
common/autotest_common.sh@1112 -- # xtrace_disable 00:28:01.663 16:10:41 -- common/autotest_common.sh@10 -- # set +x 00:28:01.663 ************************************ 00:28:01.663 END TEST nvmf_bdevperf 00:28:01.663 ************************************ 00:28:01.663 16:10:41 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:01.663 16:10:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:01.663 16:10:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:01.663 16:10:41 -- common/autotest_common.sh@10 -- # set +x 00:28:01.663 ************************************ 00:28:01.663 START TEST nvmf_target_disconnect 00:28:01.663 ************************************ 00:28:01.663 16:10:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:01.663 * Looking for test storage... 00:28:01.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:01.663 16:10:41 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:01.663 16:10:41 -- nvmf/common.sh@7 -- # uname -s 00:28:01.663 16:10:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:01.663 16:10:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:01.663 16:10:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:01.663 16:10:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:01.663 16:10:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:01.663 16:10:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:01.663 16:10:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:01.663 16:10:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:01.663 16:10:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:01.663 16:10:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:01.663 16:10:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:01.663 16:10:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:01.663 16:10:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:01.663 16:10:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:01.663 16:10:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:01.663 16:10:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:01.663 16:10:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:01.663 16:10:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:01.663 16:10:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:01.663 16:10:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:01.663 16:10:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.663 16:10:41 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.663 16:10:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.663 16:10:41 -- paths/export.sh@5 -- # export PATH 00:28:01.663 16:10:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.663 16:10:41 -- nvmf/common.sh@47 -- # : 0 00:28:01.663 16:10:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:01.663 16:10:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:01.663 16:10:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:01.663 16:10:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:01.663 16:10:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:01.663 16:10:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:01.663 16:10:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:01.663 16:10:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:01.663 16:10:41 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:01.663 16:10:41 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:01.663 16:10:41 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:01.663 16:10:41 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:28:01.663 16:10:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:01.663 16:10:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:01.663 16:10:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:01.663 16:10:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:01.663 16:10:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:01.663 16:10:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.663 16:10:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:01.663 16:10:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.663 16:10:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:01.664 16:10:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:01.664 16:10:41 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:28:01.664 16:10:41 -- common/autotest_common.sh@10 -- # set +x 00:28:06.935 16:10:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:06.935 16:10:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:06.935 16:10:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:06.935 16:10:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:06.935 16:10:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:06.935 16:10:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:06.935 16:10:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:06.935 16:10:46 -- nvmf/common.sh@295 -- # net_devs=() 00:28:06.935 16:10:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:06.935 16:10:46 -- nvmf/common.sh@296 -- # e810=() 00:28:06.935 16:10:46 -- nvmf/common.sh@296 -- # local -ga e810 00:28:06.935 16:10:46 -- nvmf/common.sh@297 -- # x722=() 00:28:06.935 16:10:46 -- nvmf/common.sh@297 -- # local -ga x722 00:28:06.935 16:10:46 -- nvmf/common.sh@298 -- # mlx=() 00:28:06.935 16:10:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:06.935 16:10:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:06.935 16:10:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:06.935 16:10:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:06.935 16:10:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:06.935 16:10:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:06.935 16:10:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:06.935 16:10:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:06.935 16:10:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:06.935 16:10:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:06.935 16:10:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:06.935 16:10:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:06.935 16:10:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:06.935 16:10:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:06.935 16:10:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:06.935 16:10:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:06.935 16:10:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:06.935 16:10:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:06.935 16:10:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:06.935 16:10:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:06.935 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:06.935 16:10:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:06.935 16:10:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:06.935 16:10:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.935 16:10:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.935 16:10:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:06.935 16:10:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:06.935 16:10:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:06.935 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:06.935 16:10:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:06.935 16:10:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:06.935 16:10:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.935 16:10:46 
-- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.935 16:10:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:06.935 16:10:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:06.935 16:10:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:06.936 16:10:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:06.936 16:10:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:06.936 16:10:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.936 16:10:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:06.936 16:10:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.936 16:10:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:06.936 Found net devices under 0000:86:00.0: cvl_0_0 00:28:06.936 16:10:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.936 16:10:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:06.936 16:10:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.936 16:10:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:06.936 16:10:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.936 16:10:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:06.936 Found net devices under 0000:86:00.1: cvl_0_1 00:28:06.936 16:10:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.936 16:10:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:06.936 16:10:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:06.936 16:10:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:06.936 16:10:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:06.936 16:10:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:06.936 16:10:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:06.936 16:10:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:06.936 16:10:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:06.936 16:10:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:06.936 16:10:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:06.936 16:10:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:06.936 16:10:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:06.936 16:10:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:06.936 16:10:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:06.936 16:10:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:06.936 16:10:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:06.936 16:10:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:06.936 16:10:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:06.936 16:10:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:06.936 16:10:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:06.936 16:10:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:06.936 16:10:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:06.936 16:10:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:06.936 16:10:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:07.194 16:10:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:07.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:07.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:28:07.194 00:28:07.194 --- 10.0.0.2 ping statistics --- 00:28:07.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.194 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:28:07.194 16:10:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:07.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:07.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:28:07.194 00:28:07.194 --- 10.0.0.1 ping statistics --- 00:28:07.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.195 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:28:07.195 16:10:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:07.195 16:10:46 -- nvmf/common.sh@411 -- # return 0 00:28:07.195 16:10:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:07.195 16:10:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:07.195 16:10:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:07.195 16:10:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:07.195 16:10:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:07.195 16:10:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:07.195 16:10:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:07.195 16:10:46 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:07.195 16:10:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:07.195 16:10:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:07.195 16:10:46 -- common/autotest_common.sh@10 -- # set +x 00:28:07.195 ************************************ 00:28:07.195 START TEST nvmf_target_disconnect_tc1 00:28:07.195 ************************************ 00:28:07.195 16:10:46 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:28:07.195 16:10:46 -- host/target_disconnect.sh@32 -- # set +e 00:28:07.195 16:10:46 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:07.454 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.454 [2024-04-26 16:10:46.965450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.454 [2024-04-26 16:10:46.965863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.454 [2024-04-26 16:10:46.965885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002040 with addr=10.0.0.2, port=4420 00:28:07.454 [2024-04-26 16:10:46.965947] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:07.454 [2024-04-26 16:10:46.965966] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:07.454 [2024-04-26 16:10:46.965978] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:28:07.454 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:07.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:07.454 Initializing NVMe Controllers 00:28:07.454 16:10:46 -- host/target_disconnect.sh@33 -- # trap - ERR 00:28:07.454 16:10:46 -- host/target_disconnect.sh@33 -- # print_backtrace 00:28:07.454 16:10:46 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:28:07.454 16:10:46 -- common/autotest_common.sh@1139 -- # return 0 00:28:07.454 
16:10:46 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:28:07.454 16:10:46 -- host/target_disconnect.sh@41 -- # set -e 00:28:07.454 00:28:07.454 real 0m0.181s 00:28:07.454 user 0m0.070s 00:28:07.454 sys 0m0.110s 00:28:07.454 16:10:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:07.454 16:10:46 -- common/autotest_common.sh@10 -- # set +x 00:28:07.454 ************************************ 00:28:07.454 END TEST nvmf_target_disconnect_tc1 00:28:07.454 ************************************ 00:28:07.454 16:10:47 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:07.454 16:10:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:07.454 16:10:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:07.454 16:10:47 -- common/autotest_common.sh@10 -- # set +x 00:28:07.713 ************************************ 00:28:07.713 START TEST nvmf_target_disconnect_tc2 00:28:07.713 ************************************ 00:28:07.713 16:10:47 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:28:07.713 16:10:47 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:28:07.713 16:10:47 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:07.713 16:10:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:07.713 16:10:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:07.713 16:10:47 -- common/autotest_common.sh@10 -- # set +x 00:28:07.713 16:10:47 -- nvmf/common.sh@470 -- # nvmfpid=2605524 00:28:07.713 16:10:47 -- nvmf/common.sh@471 -- # waitforlisten 2605524 00:28:07.713 16:10:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:07.713 16:10:47 -- common/autotest_common.sh@817 -- # '[' -z 2605524 ']' 00:28:07.713 16:10:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.713 16:10:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:07.713 16:10:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.713 16:10:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:07.713 16:10:47 -- common/autotest_common.sh@10 -- # set +x 00:28:07.713 [2024-04-26 16:10:47.253460] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:28:07.713 [2024-04-26 16:10:47.253538] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:07.713 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.713 [2024-04-26 16:10:47.374829] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:07.972 [2024-04-26 16:10:47.591406] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:07.972 [2024-04-26 16:10:47.591457] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:07.972 [2024-04-26 16:10:47.591467] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:07.972 [2024-04-26 16:10:47.591476] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
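All of the target_disconnect sub-tests run in the environment that nvmf_tcp_init assembled a few entries back: the e810 port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and carries the target address 10.0.0.2, while cvl_0_1 stays in the root namespace with the initiator address 10.0.0.1. Consolidated from the xtrace above into a plain script (interface names, addresses and commands exactly as logged):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # accept NVMe/TCP traffic on the initiator port
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
  modprobe nvme-tcp                                                    # kernel NVMe/TCP module, loaded by nvmftestinit

This is also why every nvmf_tgt launch in this log is prefixed with "ip netns exec cvl_0_0_ns_spdk": that is the NVMF_TARGET_NS_CMD wrapper set up here.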
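tc1 itself is a negative test: with no target listening yet, the reconnect initiator must fail to probe 10.0.0.2:4420, and the script only proceeds if it does. A minimal sketch of that check, reusing the binary path and -r string from the log (the return-code bookkeeping in target_disconnect.sh is slightly more elaborate and, per the trace above, compares the result against 1):

  set +e
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
      -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  rc=$?
  set -e
  # nothing is listening on 10.0.0.2:4420 at this point, so connect() returns
  # errno 111 (ECONNREFUSED), spdk_nvme_probe() fails, and rc must be non-zero
  [ "$rc" -ne 0 ]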
00:28:07.972 [2024-04-26 16:10:47.591500] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:07.972 [2024-04-26 16:10:47.592043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:07.972 [2024-04-26 16:10:47.592145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:07.972 [2024-04-26 16:10:47.592275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:07.972 [2024-04-26 16:10:47.592248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:08.539 16:10:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:08.539 16:10:48 -- common/autotest_common.sh@850 -- # return 0 00:28:08.539 16:10:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:08.539 16:10:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:08.539 16:10:48 -- common/autotest_common.sh@10 -- # set +x 00:28:08.539 16:10:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.540 16:10:48 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:08.540 16:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.540 16:10:48 -- common/autotest_common.sh@10 -- # set +x 00:28:08.540 Malloc0 00:28:08.540 16:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.540 16:10:48 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:08.540 16:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.540 16:10:48 -- common/autotest_common.sh@10 -- # set +x 00:28:08.540 [2024-04-26 16:10:48.159720] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.540 16:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.540 16:10:48 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:08.540 16:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.540 16:10:48 -- common/autotest_common.sh@10 -- # set +x 00:28:08.540 16:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.540 16:10:48 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:08.540 16:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.540 16:10:48 -- common/autotest_common.sh@10 -- # set +x 00:28:08.540 16:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.540 16:10:48 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:08.540 16:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.540 16:10:48 -- common/autotest_common.sh@10 -- # set +x 00:28:08.540 [2024-04-26 16:10:48.187976] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.540 16:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.540 16:10:48 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:08.540 16:10:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.540 16:10:48 -- common/autotest_common.sh@10 -- # set +x 00:28:08.540 16:10:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.540 16:10:48 -- host/target_disconnect.sh@50 -- # reconnectpid=2605573 00:28:08.540 16:10:48 -- host/target_disconnect.sh@52 -- # sleep 2 00:28:08.540 16:10:48 -- host/target_disconnect.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:08.798 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.704 16:10:50 -- host/target_disconnect.sh@53 -- # kill -9 2605524 00:28:10.704 16:10:50 -- host/target_disconnect.sh@55 -- # sleep 2 00:28:10.704 Read completed with error (sct=0, sc=8) 00:28:10.704 starting I/O failed 00:28:10.704 Read completed with error (sct=0, sc=8) 00:28:10.704 starting I/O failed 00:28:10.704 Read completed with error (sct=0, sc=8) 00:28:10.704 starting I/O failed 00:28:10.704 Read completed with error (sct=0, sc=8) 00:28:10.704 starting I/O failed 00:28:10.704 Read completed with error (sct=0, sc=8) 00:28:10.704 starting I/O failed 00:28:10.704 Read completed with error (sct=0, sc=8) 00:28:10.704 starting I/O failed 00:28:10.704 Read completed with error (sct=0, sc=8) 00:28:10.704 starting I/O failed 00:28:10.704 Read completed with error (sct=0, sc=8) 00:28:10.704 starting I/O failed 00:28:10.704 Read completed with error (sct=0, sc=8) 00:28:10.704 starting I/O failed 00:28:10.704 Read completed with error (sct=0, sc=8) 00:28:10.704 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 
starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 [2024-04-26 16:10:50.229593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 [2024-04-26 16:10:50.229994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 
00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Read completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 Write completed with error (sct=0, sc=8) 00:28:10.705 starting I/O failed 00:28:10.705 [2024-04-26 16:10:50.230356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:10.705 [2024-04-26 16:10:50.230780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.705 [2024-04-26 16:10:50.231243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.705 [2024-04-26 16:10:50.231293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.705 qpair failed and we were unable to recover it. 00:28:10.705 [2024-04-26 16:10:50.231697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.705 [2024-04-26 16:10:50.231875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.705 [2024-04-26 16:10:50.231916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.705 qpair failed and we were unable to recover it. 00:28:10.705 [2024-04-26 16:10:50.232280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.705 [2024-04-26 16:10:50.232653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.705 [2024-04-26 16:10:50.232693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.705 qpair failed and we were unable to recover it. 
00:28:10.705 [2024-04-26 16:10:50.233110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.705 [2024-04-26 16:10:50.233507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.705 [2024-04-26 16:10:50.233546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.705 qpair failed and we were unable to recover it. 00:28:10.705 [2024-04-26 16:10:50.233948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.705 [2024-04-26 16:10:50.234276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.705 [2024-04-26 16:10:50.234344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.705 qpair failed and we were unable to recover it. 00:28:10.705 [2024-04-26 16:10:50.234747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.235156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.235197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.706 [2024-04-26 16:10:50.235595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.235983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.236023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.706 [2024-04-26 16:10:50.236533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.236918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.236957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.706 [2024-04-26 16:10:50.237307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.237625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.237664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.706 [2024-04-26 16:10:50.237854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.238087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.238128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 
00:28:10.706 [2024-04-26 16:10:50.238589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.239049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.239099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.706 [2024-04-26 16:10:50.239486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.239778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.239792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.706 [2024-04-26 16:10:50.239910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.240199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.240214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.706 [2024-04-26 16:10:50.240573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.240901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.240940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.706 [2024-04-26 16:10:50.241420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.241873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.241913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.706 [2024-04-26 16:10:50.242289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.242689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.242728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.706 [2024-04-26 16:10:50.243170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.243498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.243537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 
00:28:10.706 [2024-04-26 16:10:50.243913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.244268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.244309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.706 [2024-04-26 16:10:50.244705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.245078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.245092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.706 [2024-04-26 16:10:50.245439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.245714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.245753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.706 [2024-04-26 16:10:50.246194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.246575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.246614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.706 [2024-04-26 16:10:50.247023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.247419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.247460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.706 [2024-04-26 16:10:50.247848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.248121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.248137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.706 [2024-04-26 16:10:50.248464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.248783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.248797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 
00:28:10.706 [2024-04-26 16:10:50.249148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.249480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.249494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.706 [2024-04-26 16:10:50.249822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.250115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.250130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.706 [2024-04-26 16:10:50.250405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.250777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.250791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.706 [2024-04-26 16:10:50.250992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.251262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.251277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.706 [2024-04-26 16:10:50.251550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.251820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.251833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.706 [2024-04-26 16:10:50.252195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.252519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.252559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.706 [2024-04-26 16:10:50.252890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.253283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.253331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 
00:28:10.706 [2024-04-26 16:10:50.253663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.254007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.706 [2024-04-26 16:10:50.254045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.706 qpair failed and we were unable to recover it. 00:28:10.707 [2024-04-26 16:10:50.254439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.707 [2024-04-26 16:10:50.254813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.707 [2024-04-26 16:10:50.254827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.707 qpair failed and we were unable to recover it. 00:28:10.707 [2024-04-26 16:10:50.255161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.707 [2024-04-26 16:10:50.255554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.707 [2024-04-26 16:10:50.255594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.707 qpair failed and we were unable to recover it. 00:28:10.707 [2024-04-26 16:10:50.255912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.707 [2024-04-26 16:10:50.256316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.707 [2024-04-26 16:10:50.256357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.707 qpair failed and we were unable to recover it. 00:28:10.707 [2024-04-26 16:10:50.256871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.707 [2024-04-26 16:10:50.257248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.707 [2024-04-26 16:10:50.257289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.707 qpair failed and we were unable to recover it. 00:28:10.707 [2024-04-26 16:10:50.257607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.707 [2024-04-26 16:10:50.257919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.707 [2024-04-26 16:10:50.257959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.707 qpair failed and we were unable to recover it. 00:28:10.707 [2024-04-26 16:10:50.258397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.707 [2024-04-26 16:10:50.258747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.707 [2024-04-26 16:10:50.258761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.707 qpair failed and we were unable to recover it. 
00:28:10.707 [2024-04-26 16:10:50.259110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:10.707 [2024-04-26 16:10:50.259426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:10.707 [2024-04-26 16:10:50.259466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:28:10.707 qpair failed and we were unable to recover it.
00:28:10.707 [2024-04-26 16:10:50.259846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:10.707 [2024-04-26 16:10:50.260151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:10.707 [2024-04-26 16:10:50.260193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:28:10.707 qpair failed and we were unable to recover it.
[... the same pair of posix_sock_create connect() failures (errno = 111) followed by an nvme_tcp_qpair_connect_sock error for tqpair=0x614000020040 (addr=10.0.0.2, port=4420) repeats for every subsequent retry through 16:10:50.388, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:28:10.981 [2024-04-26 16:10:50.388000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:10.981 [2024-04-26 16:10:50.388432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:10.981 [2024-04-26 16:10:50.388473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:28:10.981 qpair failed and we were unable to recover it.
00:28:10.981 [2024-04-26 16:10:50.388921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.389372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.389413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.981 qpair failed and we were unable to recover it. 00:28:10.981 [2024-04-26 16:10:50.389828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.390201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.390242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.981 qpair failed and we were unable to recover it. 00:28:10.981 [2024-04-26 16:10:50.390650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.390979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.391018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.981 qpair failed and we were unable to recover it. 00:28:10.981 [2024-04-26 16:10:50.391492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.391872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.391917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.981 qpair failed and we were unable to recover it. 00:28:10.981 [2024-04-26 16:10:50.392332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.392786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.392825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.981 qpair failed and we were unable to recover it. 00:28:10.981 [2024-04-26 16:10:50.393268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.393678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.393717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.981 qpair failed and we were unable to recover it. 00:28:10.981 [2024-04-26 16:10:50.394183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.394575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.394615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.981 qpair failed and we were unable to recover it. 
00:28:10.981 [2024-04-26 16:10:50.395018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.395475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.395490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.981 qpair failed and we were unable to recover it. 00:28:10.981 [2024-04-26 16:10:50.395904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.396360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.396401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.981 qpair failed and we were unable to recover it. 00:28:10.981 [2024-04-26 16:10:50.396816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.397288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.397329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.981 qpair failed and we were unable to recover it. 00:28:10.981 [2024-04-26 16:10:50.397680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.398097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.398112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.981 qpair failed and we were unable to recover it. 00:28:10.981 [2024-04-26 16:10:50.398501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.398915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.398929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.981 qpair failed and we were unable to recover it. 00:28:10.981 [2024-04-26 16:10:50.399341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.399669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.399709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.981 qpair failed and we were unable to recover it. 00:28:10.981 [2024-04-26 16:10:50.400171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.400537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.400576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.981 qpair failed and we were unable to recover it. 
00:28:10.981 [2024-04-26 16:10:50.400980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.401391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.401406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.981 qpair failed and we were unable to recover it. 00:28:10.981 [2024-04-26 16:10:50.401770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.402211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.402252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.981 qpair failed and we were unable to recover it. 00:28:10.981 [2024-04-26 16:10:50.402667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.402991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.403030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.981 qpair failed and we were unable to recover it. 00:28:10.981 [2024-04-26 16:10:50.403491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.403926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.403965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.981 qpair failed and we were unable to recover it. 00:28:10.981 [2024-04-26 16:10:50.404461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.404898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.404938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.981 qpair failed and we were unable to recover it. 00:28:10.981 [2024-04-26 16:10:50.405365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.405745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.981 [2024-04-26 16:10:50.405785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.981 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.406238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.406675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.406715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 
00:28:10.982 [2024-04-26 16:10:50.407197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.407648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.407688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.408108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.408574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.408614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.409031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.409499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.409513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.409969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.410391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.410433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.410769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.411195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.411209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.411591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.412007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.412046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.412513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.412968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.413007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 
00:28:10.982 [2024-04-26 16:10:50.413433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.413892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.413931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.414402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.414774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.414813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.415291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.415749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.415789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.416258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.416664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.416704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.417131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.417531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.417571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.418054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.418506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.418546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.418888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.419325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.419339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 
00:28:10.982 [2024-04-26 16:10:50.419759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.420147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.420161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.420526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.420971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.421011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.421497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.421883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.421923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.422342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.422738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.422778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.423244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.423704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.423743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.424212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.424524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.424563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.425033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.425433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.425473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 
00:28:10.982 [2024-04-26 16:10:50.425956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.426290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.426331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.426722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.427159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.427200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.427619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.428111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.428153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.428486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.428864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.428903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.982 qpair failed and we were unable to recover it. 00:28:10.982 [2024-04-26 16:10:50.429286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.982 [2024-04-26 16:10:50.429743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.429782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.430277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.430682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.430721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.431137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.431615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.431655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 
00:28:10.983 [2024-04-26 16:10:50.432081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.432542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.432582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.433058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.433522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.433536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.433898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.434341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.434382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.434782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.435276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.435318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.435836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.436231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.436273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.436684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.437167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.437215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.437732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.438199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.438240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 
00:28:10.983 [2024-04-26 16:10:50.438661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.439125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.439167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.439561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.440000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.440039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.440469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.440841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.440880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.441345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.441725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.441765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.442226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.442630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.442669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.443115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.443579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.443618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.444142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.444547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.444587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 
00:28:10.983 [2024-04-26 16:10:50.445064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.445485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.445537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.446016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.446433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.446474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.446957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.447354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.447369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.447713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.448097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.448137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.448558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.448899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.448912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.449309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.449690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.449729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.450201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.450664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.450703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 
00:28:10.983 [2024-04-26 16:10:50.451225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.451627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.451667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.452065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.452560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.452600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.453004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.453406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.983 [2024-04-26 16:10:50.453453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.983 qpair failed and we were unable to recover it. 00:28:10.983 [2024-04-26 16:10:50.453933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.454344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.454384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.454791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.455187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.455228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.455697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.456161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.456202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.456692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.457131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.457173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 
00:28:10.984 [2024-04-26 16:10:50.457580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.457928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.457968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.458425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.458890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.458930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.459402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.459793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.459833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.460309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.460716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.460756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.461157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.461543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.461582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.462093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.462487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.462533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.463094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.463634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.463674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 
00:28:10.984 [2024-04-26 16:10:50.464155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.464573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.464612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.465091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.465532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.465571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.465979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.466458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.466500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.466975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.467457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.467498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.467966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.468386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.468428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.468902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.469368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.469410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.469844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.470313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.470353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 
00:28:10.984 [2024-04-26 16:10:50.470835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.471298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.471339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.471734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.472149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.472197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.472594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.473058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.473121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.473556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.473973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.474013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.474494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.474885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.474925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.475396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.475865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.475905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.476329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.476795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.476835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 
00:28:10.984 [2024-04-26 16:10:50.477225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.477690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.477731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.478205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.478667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.984 [2024-04-26 16:10:50.478708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.984 qpair failed and we were unable to recover it. 00:28:10.984 [2024-04-26 16:10:50.479118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.479603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.479643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.985 qpair failed and we were unable to recover it. 00:28:10.985 [2024-04-26 16:10:50.480123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.480531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.480570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.985 qpair failed and we were unable to recover it. 00:28:10.985 [2024-04-26 16:10:50.481054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.481498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.481517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.985 qpair failed and we were unable to recover it. 00:28:10.985 [2024-04-26 16:10:50.481861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.482308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.482324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.985 qpair failed and we were unable to recover it. 00:28:10.985 [2024-04-26 16:10:50.482746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.483116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.483132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.985 qpair failed and we were unable to recover it. 
00:28:10.985 [2024-04-26 16:10:50.483537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.483953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.483968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.985 qpair failed and we were unable to recover it. 00:28:10.985 [2024-04-26 16:10:50.484311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.484680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.484696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.985 qpair failed and we were unable to recover it. 00:28:10.985 [2024-04-26 16:10:50.485130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.485465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.485479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.985 qpair failed and we were unable to recover it. 00:28:10.985 [2024-04-26 16:10:50.485821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.486243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.486259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.985 qpair failed and we were unable to recover it. 00:28:10.985 [2024-04-26 16:10:50.486658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.487066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.487087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.985 qpair failed and we were unable to recover it. 00:28:10.985 [2024-04-26 16:10:50.487372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.487808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.487823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.985 qpair failed and we were unable to recover it. 00:28:10.985 [2024-04-26 16:10:50.488193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.488616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.488631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.985 qpair failed and we were unable to recover it. 
00:28:10.985 [2024-04-26 16:10:50.488977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.489424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.489439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.985 qpair failed and we were unable to recover it. 00:28:10.985 [2024-04-26 16:10:50.489800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.490243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.490258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.985 qpair failed and we were unable to recover it. 00:28:10.985 [2024-04-26 16:10:50.490669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.491026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.491041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.985 qpair failed and we were unable to recover it. 00:28:10.985 [2024-04-26 16:10:50.491485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.491821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.491837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.985 qpair failed and we were unable to recover it. 00:28:10.985 [2024-04-26 16:10:50.492113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.492541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.492559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.985 qpair failed and we were unable to recover it. 00:28:10.985 [2024-04-26 16:10:50.492976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.493397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.493413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.985 qpair failed and we were unable to recover it. 00:28:10.985 [2024-04-26 16:10:50.493764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.494103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.494120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.985 qpair failed and we were unable to recover it. 
00:28:10.985 [2024-04-26 16:10:50.494476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.494914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.494933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.985 qpair failed and we were unable to recover it. 00:28:10.985 [2024-04-26 16:10:50.495361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.495784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.495799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.985 qpair failed and we were unable to recover it. 00:28:10.985 [2024-04-26 16:10:50.496161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.985 [2024-04-26 16:10:50.496610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.496625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.496917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.497341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.497357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.497715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.498056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.498075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.498494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.498902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.498917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.499265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.499592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.499608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 
00:28:10.986 [2024-04-26 16:10:50.500028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.500447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.500462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.500757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.501110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.501125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.501526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.501866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.501881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.502305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.502721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.502737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.503138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.503553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.503569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.503923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.504366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.504382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.504676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.505094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.505109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 
00:28:10.986 [2024-04-26 16:10:50.505391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.505735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.505751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.506086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.506436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.506451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.506894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.507224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.507240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.507635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.507985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.508000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.508462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.508805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.508820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.509269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.509678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.509694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.510113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.510528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.510547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 
00:28:10.986 [2024-04-26 16:10:50.510992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.511328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.511355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.511810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.512216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.512238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.512615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.513034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.513058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.513498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.513933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.513957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.514310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.514746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.514769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.515124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.515521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.515548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.515972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.516374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.516404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 
00:28:10.986 [2024-04-26 16:10:50.516748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.517181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.517196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.986 qpair failed and we were unable to recover it. 00:28:10.986 [2024-04-26 16:10:50.517623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.986 [2024-04-26 16:10:50.518036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.518051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.518508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.518838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.518852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.519277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.519688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.519702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.520028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.520443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.520458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.520801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.521139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.521154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.521574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.521913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.521927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 
00:28:10.987 [2024-04-26 16:10:50.522348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.522737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.522752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.523194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.523557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.523573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.523962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.524294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.524309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.524688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.525045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.525059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.525346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.525711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.525725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.526140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.526476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.526491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.526826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.527240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.527255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 
00:28:10.987 [2024-04-26 16:10:50.527579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.527902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.527917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.528281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.528694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.528708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.529102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.529516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.529530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.529920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.530255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.530271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.530683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.531078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.531093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.531509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.531898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.531913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.532302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.532720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.532735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 
00:28:10.987 [2024-04-26 16:10:50.533154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.533514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.533529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.533856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.534272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.534287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.534710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.535081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.535096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.535515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.535935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.535950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.536290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.536700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.536715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.537130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.537521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.537535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 00:28:10.987 [2024-04-26 16:10:50.537887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.538274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.987 [2024-04-26 16:10:50.538288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.987 qpair failed and we were unable to recover it. 
00:28:10.987 [2024-04-26 16:10:50.538707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.539113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.539127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.539471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.539859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.539873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.540205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.540555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.540594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.541067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.541477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.541517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.541998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.542414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.542455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.542795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.543232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.543273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.543691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.544147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.544188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 
00:28:10.988 [2024-04-26 16:10:50.544674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.545131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.545172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.545528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.545928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.545967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.546455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.546831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.546870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.547339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.547799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.547839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.548194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.548596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.548635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.549098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.549480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.549519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.549960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.550334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.550376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 
00:28:10.988 [2024-04-26 16:10:50.550844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.551231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.551272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.551649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.552113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.552155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.552584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.552962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.553022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.553441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.553831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.553871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.554215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.554688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.554729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.555136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.555588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.555627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.556118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.556533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.556572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 
00:28:10.988 [2024-04-26 16:10:50.557018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.557493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.557534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.558011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.558406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.558449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.558882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.559288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.559329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.559820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.560201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.560215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.560646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.561106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.561147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.988 [2024-04-26 16:10:50.561619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.562085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.988 [2024-04-26 16:10:50.562126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.988 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.562603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.562979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.563019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 
00:28:10.989 [2024-04-26 16:10:50.563497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.563962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.564002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.564452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.564898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.564913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.565340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.565746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.565785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.566262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.566699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.566739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.567181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.567640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.567680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.568083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.568546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.568585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.569069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.569419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.569458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 
00:28:10.989 [2024-04-26 16:10:50.569933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.570273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.570314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.570790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.571191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.571232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.571697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.572091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.572132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.572551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.573028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.573068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.573426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.573822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.573861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.574259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.574733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.574773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.575266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.575733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.575773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 
00:28:10.989 [2024-04-26 16:10:50.576187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.576507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.576544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.576943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.577299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.577340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.577771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.578180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.578222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.578624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.579124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.579166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.579675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.580180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.580194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.580548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.580961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.581000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.581486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.581873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.581888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 
00:28:10.989 [2024-04-26 16:10:50.582170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.582517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.582531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.582935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.583355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.583392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.583737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.584169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.584210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.584670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.584997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.585037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.989 qpair failed and we were unable to recover it. 00:28:10.989 [2024-04-26 16:10:50.585528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.585908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.989 [2024-04-26 16:10:50.585948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 00:28:10.990 [2024-04-26 16:10:50.586421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.586834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.586874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 00:28:10.990 [2024-04-26 16:10:50.587297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.587739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.587780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 
00:28:10.990 [2024-04-26 16:10:50.588170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.588497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.588536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 00:28:10.990 [2024-04-26 16:10:50.588937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.589405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.589446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 00:28:10.990 [2024-04-26 16:10:50.589881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.590256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.590304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 00:28:10.990 [2024-04-26 16:10:50.590812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.591264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.591305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 00:28:10.990 [2024-04-26 16:10:50.591712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.592152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.592193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 00:28:10.990 [2024-04-26 16:10:50.592594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.592994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.593033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 00:28:10.990 [2024-04-26 16:10:50.593453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.593904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.593944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 
00:28:10.990 [2024-04-26 16:10:50.594428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.594827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.594841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 00:28:10.990 [2024-04-26 16:10:50.595244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.595723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.595764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 00:28:10.990 [2024-04-26 16:10:50.596238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.596656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.596695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 00:28:10.990 [2024-04-26 16:10:50.597027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.597359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.597400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 00:28:10.990 [2024-04-26 16:10:50.597824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.598293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.598338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 00:28:10.990 [2024-04-26 16:10:50.598813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.599269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.599300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 00:28:10.990 [2024-04-26 16:10:50.599678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.600096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.600137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 
00:28:10.990 [2024-04-26 16:10:50.600533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.600945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.600960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 00:28:10.990 [2024-04-26 16:10:50.601343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.601689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.601729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 00:28:10.990 [2024-04-26 16:10:50.602198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.602582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.602622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 00:28:10.990 [2024-04-26 16:10:50.603098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.603545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.603585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 00:28:10.990 [2024-04-26 16:10:50.604006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.604425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.604466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 00:28:10.990 [2024-04-26 16:10:50.604951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.605370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.605411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.990 qpair failed and we were unable to recover it. 00:28:10.990 [2024-04-26 16:10:50.605840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.990 [2024-04-26 16:10:50.606304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.606346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 
00:28:10.991 [2024-04-26 16:10:50.606770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.607152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.607193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.607597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.608107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.608155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.608588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.608995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.609035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.609457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.609846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.609886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.610372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.610720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.610760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.611104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.611499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.611539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.612001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.612474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.612515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 
00:28:10.991 [2024-04-26 16:10:50.612933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.613411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.613452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.613898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.614344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.614387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.614792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.615255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.615297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.615836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.616328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.616369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.616743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.617185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.617235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.617694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.618160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.618203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.618681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.619087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.619128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 
00:28:10.991 [2024-04-26 16:10:50.619571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.620030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.620081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.620587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.621093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.621135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.621644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.621982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.622022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.622448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.622921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.622962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.623517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.623910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.623950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.624436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.624868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.624882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.625314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.625782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.625822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 
00:28:10.991 [2024-04-26 16:10:50.626275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.626723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.626764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.627233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.627652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.627667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.628106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.628600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.628641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.629160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.629637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.629678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.630203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.630700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.630740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.991 qpair failed and we were unable to recover it. 00:28:10.991 [2024-04-26 16:10:50.631252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.991 [2024-04-26 16:10:50.631675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.631690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 00:28:10.992 [2024-04-26 16:10:50.632048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.632510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.632552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 
00:28:10.992 [2024-04-26 16:10:50.632912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.633312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.633354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 00:28:10.992 [2024-04-26 16:10:50.633822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.634292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.634335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 00:28:10.992 [2024-04-26 16:10:50.634835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.635303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.635347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 00:28:10.992 [2024-04-26 16:10:50.635881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.636293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.636335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 00:28:10.992 [2024-04-26 16:10:50.636773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.637185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.637227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 00:28:10.992 [2024-04-26 16:10:50.637738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.638240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.638283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 00:28:10.992 [2024-04-26 16:10:50.638747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.639200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.639242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 
00:28:10.992 [2024-04-26 16:10:50.639657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.640136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.640179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 00:28:10.992 [2024-04-26 16:10:50.640610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.641093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.641136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 00:28:10.992 [2024-04-26 16:10:50.641645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.642093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.642136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 00:28:10.992 [2024-04-26 16:10:50.642605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.643051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.643104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 00:28:10.992 [2024-04-26 16:10:50.643659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.644054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.644068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 00:28:10.992 [2024-04-26 16:10:50.644522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.644929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.644970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 00:28:10.992 [2024-04-26 16:10:50.645459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.645944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.645984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 
00:28:10.992 [2024-04-26 16:10:50.646504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.646898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.646913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 00:28:10.992 [2024-04-26 16:10:50.647261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.647641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.647681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 00:28:10.992 [2024-04-26 16:10:50.648161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.648665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.648706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 00:28:10.992 [2024-04-26 16:10:50.649201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.649643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.649658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 00:28:10.992 [2024-04-26 16:10:50.649955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.650381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.650397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 00:28:10.992 [2024-04-26 16:10:50.650780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.651254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.651296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 00:28:10.992 [2024-04-26 16:10:50.651745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.652134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.652177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 
00:28:10.992 [2024-04-26 16:10:50.652670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.653078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.653095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 00:28:10.992 [2024-04-26 16:10:50.653522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.653997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.654038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 00:28:10.992 [2024-04-26 16:10:50.654473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.654772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.992 [2024-04-26 16:10:50.654788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:10.992 qpair failed and we were unable to recover it. 00:28:11.258 [2024-04-26 16:10:50.655088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.655487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.655502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.258 qpair failed and we were unable to recover it. 00:28:11.258 [2024-04-26 16:10:50.655855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.656228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.656244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.258 qpair failed and we were unable to recover it. 00:28:11.258 [2024-04-26 16:10:50.656651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.657046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.657098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.258 qpair failed and we were unable to recover it. 00:28:11.258 [2024-04-26 16:10:50.657590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.658060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.658110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.258 qpair failed and we were unable to recover it. 
00:28:11.258 [2024-04-26 16:10:50.658628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.658964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.659004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.258 qpair failed and we were unable to recover it. 00:28:11.258 [2024-04-26 16:10:50.659436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.659830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.659871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.258 qpair failed and we were unable to recover it. 00:28:11.258 [2024-04-26 16:10:50.660345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.660805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.660845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.258 qpair failed and we were unable to recover it. 00:28:11.258 [2024-04-26 16:10:50.661242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.661691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.661731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.258 qpair failed and we were unable to recover it. 00:28:11.258 [2024-04-26 16:10:50.662231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.662710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.662750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.258 qpair failed and we were unable to recover it. 00:28:11.258 [2024-04-26 16:10:50.663222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.663643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.663695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.258 qpair failed and we were unable to recover it. 00:28:11.258 [2024-04-26 16:10:50.664119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.664584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.664600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.258 qpair failed and we were unable to recover it. 
00:28:11.258 [2024-04-26 16:10:50.665053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.665574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.665615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.258 qpair failed and we were unable to recover it. 00:28:11.258 [2024-04-26 16:10:50.666031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.666514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.666556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.258 qpair failed and we were unable to recover it. 00:28:11.258 [2024-04-26 16:10:50.666944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.667291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.667333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.258 qpair failed and we were unable to recover it. 00:28:11.258 [2024-04-26 16:10:50.667840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.668305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.668352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.258 qpair failed and we were unable to recover it. 00:28:11.258 [2024-04-26 16:10:50.668865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.669312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.669380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.258 qpair failed and we were unable to recover it. 00:28:11.258 [2024-04-26 16:10:50.669724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.670139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.670182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.258 qpair failed and we were unable to recover it. 00:28:11.258 [2024-04-26 16:10:50.670662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.671047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.671096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.258 qpair failed and we were unable to recover it. 
00:28:11.258 [2024-04-26 16:10:50.671573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.671976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.672016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.258 qpair failed and we were unable to recover it. 00:28:11.258 [2024-04-26 16:10:50.672490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.258 [2024-04-26 16:10:50.672968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.672983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.673358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.673760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.673800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.674312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.674706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.674747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.675236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.675631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.675672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.676003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.676411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.676453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.676802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.677215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.677258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 
00:28:11.259 [2024-04-26 16:10:50.677742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.678164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.678180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.678605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.679047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.679110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.679507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.679970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.679984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.680395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.680852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.680893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.681295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.681789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.681828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.682368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.682816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.682856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.683340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.683714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.683729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 
00:28:11.259 [2024-04-26 16:10:50.684155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.684625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.684665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.685192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.685690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.685731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.686154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.686624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.686665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.687146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.687542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.687581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.688089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.688483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.688524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.688925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.689383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.689425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.689936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.690332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.690374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 
00:28:11.259 [2024-04-26 16:10:50.690845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.691312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.691354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.691778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.692260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.692301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.692793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.693207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.693250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.693787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.694233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.694275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.694768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.695261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.695303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.695714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.696039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.696077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 00:28:11.259 [2024-04-26 16:10:50.696425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.696849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.259 [2024-04-26 16:10:50.696887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.259 qpair failed and we were unable to recover it. 
00:28:11.260 [2024-04-26 16:10:50.697347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.260 [2024-04-26 16:10:50.697821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.260 [2024-04-26 16:10:50.697861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:28:11.260 qpair failed and we were unable to recover it.
[... the same three-message sequence (connect() failed, errno = 111; sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every retry timestamped 16:10:50.698 through 16:10:50.834 ...]
00:28:11.265 [2024-04-26 16:10:50.834565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.265 [2024-04-26 16:10:50.834964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.265 [2024-04-26 16:10:50.835006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:28:11.265 qpair failed and we were unable to recover it.
00:28:11.265 [2024-04-26 16:10:50.835513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.265 [2024-04-26 16:10:50.835928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.265 [2024-04-26 16:10:50.835968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.265 qpair failed and we were unable to recover it. 00:28:11.265 [2024-04-26 16:10:50.836437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.265 [2024-04-26 16:10:50.836783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.836798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.837212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.837691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.837731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.838244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.838638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.838678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.839162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.839631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.839671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.840154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.840649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.840688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.841212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.841684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.841724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 
00:28:11.266 [2024-04-26 16:10:50.842251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.842808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.842849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.843282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.843731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.843773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.844269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.844692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.844733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.845146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.845535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.845577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.846081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.846486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.846527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.847005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.847520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.847562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.848062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.848501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.848541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 
00:28:11.266 [2024-04-26 16:10:50.849089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.849588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.849629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.850134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.850608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.850649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.851086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.851560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.851599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.852138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.852612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.852653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.853136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.853585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.853627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.854092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.854469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.854510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.854920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.855405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.855447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 
00:28:11.266 [2024-04-26 16:10:50.855975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.856455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.856498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.856927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.857334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.857376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.857903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.858290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.858356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.858842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.859313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.859356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.859789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.860204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.860256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.860613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.861092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.861135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 00:28:11.266 [2024-04-26 16:10:50.861538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.862018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.266 [2024-04-26 16:10:50.862057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.266 qpair failed and we were unable to recover it. 
00:28:11.266 [2024-04-26 16:10:50.862555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.862999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.863039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.863541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.863958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.863997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.864529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.865004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.865044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.865569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.866068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.866122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.866632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.867106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.867149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.867673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.868117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.868161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.868656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.869039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.869089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 
00:28:11.267 [2024-04-26 16:10:50.869520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.869993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.870033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.870571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.871017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.871057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.871507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.871978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.872018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.872513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.872986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.873027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.873548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.874021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.874062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.874587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.875093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.875135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.875536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.875954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.876000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 
00:28:11.267 [2024-04-26 16:10:50.876521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.877019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.877059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.877465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.877950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.877989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.878522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.878947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.878997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.879456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.879878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.879918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.880313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.880790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.880830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.881326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.881794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.881834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.882315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.882725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.882765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 
00:28:11.267 [2024-04-26 16:10:50.883174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.883661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.883702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.884184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.884663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.884703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.885196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.885668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.885714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.886041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.886395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.886437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.886899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.887368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.887409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.267 qpair failed and we were unable to recover it. 00:28:11.267 [2024-04-26 16:10:50.887950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.888447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.267 [2024-04-26 16:10:50.888490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.268 qpair failed and we were unable to recover it. 00:28:11.268 [2024-04-26 16:10:50.888910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.268 [2024-04-26 16:10:50.889383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.268 [2024-04-26 16:10:50.889424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.268 qpair failed and we were unable to recover it. 
00:28:11.268 [2024-04-26 16:10:50.889850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.268 [2024-04-26 16:10:50.890213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.890232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.890575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.890950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.890990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.891525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.891919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.891959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.892448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.892920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.892959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.893362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.893761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.893802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.894205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.894685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.894731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.895210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.895661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.895676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 
00:28:11.269 [2024-04-26 16:10:50.896026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.896459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.896501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.896982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.897453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.897494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.898016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.898435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.898476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.898967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.899426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.899468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.899988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.900459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.900502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.900935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.901432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.901473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.901981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.902477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.902518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 
00:28:11.269 [2024-04-26 16:10:50.903025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.903447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.903489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.903897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.904274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.904333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.904820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.905208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.905249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.905732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.906204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.906245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.906774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.907244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.907286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.907738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.908209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.908252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.908792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.909186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.909228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 
00:28:11.269 [2024-04-26 16:10:50.909706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.910153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.910195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.910618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.911093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.911135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.911663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.912051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.912103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.912599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.912966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.913006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.913471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.913944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.913985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.914472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.914942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.269 [2024-04-26 16:10:50.914982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.269 qpair failed and we were unable to recover it. 00:28:11.269 [2024-04-26 16:10:50.915460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.915928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.915969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.270 qpair failed and we were unable to recover it. 
00:28:11.270 [2024-04-26 16:10:50.916421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.916843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.916883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.270 qpair failed and we were unable to recover it. 00:28:11.270 [2024-04-26 16:10:50.917367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.917846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.917886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.270 qpair failed and we were unable to recover it. 00:28:11.270 [2024-04-26 16:10:50.918371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.918756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.918796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.270 qpair failed and we were unable to recover it. 00:28:11.270 [2024-04-26 16:10:50.919253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.919599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.919638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.270 qpair failed and we were unable to recover it. 00:28:11.270 [2024-04-26 16:10:50.920115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.920562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.920603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.270 qpair failed and we were unable to recover it. 00:28:11.270 [2024-04-26 16:10:50.921089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.921484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.921524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.270 qpair failed and we were unable to recover it. 00:28:11.270 [2024-04-26 16:10:50.922044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.922536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.922578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.270 qpair failed and we were unable to recover it. 
00:28:11.270 [2024-04-26 16:10:50.923043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.923512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.923553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.270 qpair failed and we were unable to recover it. 00:28:11.270 [2024-04-26 16:10:50.924002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.924424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.924466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.270 qpair failed and we were unable to recover it. 00:28:11.270 [2024-04-26 16:10:50.924860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.925338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.925381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.270 qpair failed and we were unable to recover it. 00:28:11.270 [2024-04-26 16:10:50.925918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.926386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.926429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.270 qpair failed and we were unable to recover it. 00:28:11.270 [2024-04-26 16:10:50.926849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.927272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.927313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.270 qpair failed and we were unable to recover it. 00:28:11.270 [2024-04-26 16:10:50.927791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.928267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.928308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.270 qpair failed and we were unable to recover it. 00:28:11.270 [2024-04-26 16:10:50.928832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.929223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.270 [2024-04-26 16:10:50.929265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.270 qpair failed and we were unable to recover it. 
00:28:11.270 [2024-04-26 16:10:50.929732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.270 [2024-04-26 16:10:50.930196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.270 [2024-04-26 16:10:50.930212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:28:11.270 qpair failed and we were unable to recover it.
00:28:11.270 [... the same three-message failure repeats for every connect attempt between 16:10:50.930 and 16:10:51.059 (console timestamps 00:28:11.270 through 00:28:11.544): posix_sock_create reports connect() failed, errno = 111; nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420; and each qpair fails and cannot be recovered ...]
00:28:11.544 [2024-04-26 16:10:51.059343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.544 [2024-04-26 16:10:51.059757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.544 [2024-04-26 16:10:51.059797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:28:11.544 qpair failed and we were unable to recover it.
00:28:11.544 [2024-04-26 16:10:51.060287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.544 [2024-04-26 16:10:51.060741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.544 [2024-04-26 16:10:51.060781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.544 qpair failed and we were unable to recover it. 00:28:11.544 [2024-04-26 16:10:51.061274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.544 [2024-04-26 16:10:51.061677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.544 [2024-04-26 16:10:51.061717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.544 qpair failed and we were unable to recover it. 00:28:11.544 [2024-04-26 16:10:51.062144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.544 [2024-04-26 16:10:51.062543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.544 [2024-04-26 16:10:51.062584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.544 qpair failed and we were unable to recover it. 00:28:11.544 [2024-04-26 16:10:51.063084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.544 [2024-04-26 16:10:51.063455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.544 [2024-04-26 16:10:51.063501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.544 qpair failed and we were unable to recover it. 00:28:11.544 [2024-04-26 16:10:51.063935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.544 [2024-04-26 16:10:51.064373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.544 [2024-04-26 16:10:51.064415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.544 qpair failed and we were unable to recover it. 00:28:11.544 [2024-04-26 16:10:51.064831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.544 [2024-04-26 16:10:51.065255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.544 [2024-04-26 16:10:51.065297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.544 qpair failed and we were unable to recover it. 00:28:11.544 [2024-04-26 16:10:51.065635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.544 [2024-04-26 16:10:51.066112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.544 [2024-04-26 16:10:51.066154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.544 qpair failed and we were unable to recover it. 
00:28:11.544 [2024-04-26 16:10:51.066598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.544 [2024-04-26 16:10:51.067033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.544 [2024-04-26 16:10:51.067084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.544 qpair failed and we were unable to recover it. 00:28:11.544 [2024-04-26 16:10:51.067493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.544 [2024-04-26 16:10:51.067972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.544 [2024-04-26 16:10:51.068012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.068455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.068800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.068815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.069228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.069628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.069669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.070095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.070502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.070544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.070926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.071217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.071259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.071657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.072054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.072105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 
00:28:11.545 [2024-04-26 16:10:51.072526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.072972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.073013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.073450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.073850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.073890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.074400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.074796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.074836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.075316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.075747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.075787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.076268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.076738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.076780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.077244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.077672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.077712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.078234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.078658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.078698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 
00:28:11.545 [2024-04-26 16:10:51.079057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.079479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.079519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.079862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.080319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.080361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.080766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.081256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.081297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.081693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.082091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.082133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.082610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.082970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.083012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.083573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.084002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.084043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.084397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.084823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.084863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 
00:28:11.545 [2024-04-26 16:10:51.085265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.085685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.085726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.086149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.086622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.086663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.087126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.087575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.087617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.087997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.088391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.088432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.088903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.089366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.089408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.089806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.090267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.090309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.545 qpair failed and we were unable to recover it. 00:28:11.545 [2024-04-26 16:10:51.090799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.545 [2024-04-26 16:10:51.091291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.091343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 
00:28:11.546 [2024-04-26 16:10:51.091706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.092155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.092197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.092680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.093111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.093153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.093495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.093983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.094025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.094456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.094838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.094879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.095237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.095582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.095623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.096092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.096433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.096474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.096905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.097293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.097337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 
00:28:11.546 [2024-04-26 16:10:51.097754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.098214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.098256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.098647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.099043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.099112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.099522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.099993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.100033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.100469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.100977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.101017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.101477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.101938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.101977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.102459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.102879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.102919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.103404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.103736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.103777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 
00:28:11.546 [2024-04-26 16:10:51.104184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.104589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.104629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.104977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.105370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.105412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.105898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.106234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.106276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.106671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.107066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.107117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.107600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.107931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.107946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.108407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.108873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.108914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.109384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.109806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.109821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 
00:28:11.546 [2024-04-26 16:10:51.110200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.110576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.110591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.110995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.111388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.111429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.111896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.112394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.112436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.112858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.113296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.113312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.113685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.114087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.114104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.546 qpair failed and we were unable to recover it. 00:28:11.546 [2024-04-26 16:10:51.114452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.546 [2024-04-26 16:10:51.114814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.114830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.547 [2024-04-26 16:10:51.115261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.115621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.115662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 
00:28:11.547 [2024-04-26 16:10:51.116151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.116547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.116588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.547 [2024-04-26 16:10:51.117009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.117422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.117473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.547 [2024-04-26 16:10:51.117896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.118358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.118400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.547 [2024-04-26 16:10:51.118932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.119367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.119410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.547 [2024-04-26 16:10:51.119815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.120281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.120323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.547 [2024-04-26 16:10:51.120769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.121246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.121287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.547 [2024-04-26 16:10:51.121645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.122144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.122186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 
00:28:11.547 [2024-04-26 16:10:51.122710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.123095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.123136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.547 [2024-04-26 16:10:51.123567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.123975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.124016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.547 [2024-04-26 16:10:51.124469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.124866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.124906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.547 [2024-04-26 16:10:51.125385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.125741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.125785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.547 [2024-04-26 16:10:51.126213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.126543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.126583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.547 [2024-04-26 16:10:51.126997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.127453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.127517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.547 [2024-04-26 16:10:51.127945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.128352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.128393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 
00:28:11.547 [2024-04-26 16:10:51.128754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.129219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.129262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.547 [2024-04-26 16:10:51.129669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.130151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.130193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.547 [2024-04-26 16:10:51.130674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.131146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.131188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.547 [2024-04-26 16:10:51.131668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.132014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.132054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.547 [2024-04-26 16:10:51.132421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.132760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.132801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.547 [2024-04-26 16:10:51.133307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.133725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.133765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.547 [2024-04-26 16:10:51.134175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.134524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.134565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 
00:28:11.547 [2024-04-26 16:10:51.134959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.135530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.135572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.547 [2024-04-26 16:10:51.136086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.136420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.136460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.547 [2024-04-26 16:10:51.136806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.137227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.137242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.547 [2024-04-26 16:10:51.137617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.138126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.547 [2024-04-26 16:10:51.138169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.547 qpair failed and we were unable to recover it. 00:28:11.548 [2024-04-26 16:10:51.138606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.139018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.139058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.548 qpair failed and we were unable to recover it. 00:28:11.548 [2024-04-26 16:10:51.139442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.139969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.140010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.548 qpair failed and we were unable to recover it. 00:28:11.548 [2024-04-26 16:10:51.140467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.140879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.140918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.548 qpair failed and we were unable to recover it. 
00:28:11.548 [2024-04-26 16:10:51.141357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.141832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.141884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.548 qpair failed and we were unable to recover it. 00:28:11.548 [2024-04-26 16:10:51.142244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.142700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.142743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.548 qpair failed and we were unable to recover it. 00:28:11.548 [2024-04-26 16:10:51.143168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.143611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.143651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.548 qpair failed and we were unable to recover it. 00:28:11.548 [2024-04-26 16:10:51.144173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.144591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.144631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.548 qpair failed and we were unable to recover it. 00:28:11.548 [2024-04-26 16:10:51.145126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.145575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.145615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.548 qpair failed and we were unable to recover it. 00:28:11.548 [2024-04-26 16:10:51.146106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.146524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.146565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.548 qpair failed and we were unable to recover it. 00:28:11.548 [2024-04-26 16:10:51.147066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.147526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.147567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.548 qpair failed and we were unable to recover it. 
00:28:11.548 [2024-04-26 16:10:51.148088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.148427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.148468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.548 qpair failed and we were unable to recover it. 00:28:11.548 [2024-04-26 16:10:51.148824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.149209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.149251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.548 qpair failed and we were unable to recover it. 00:28:11.548 [2024-04-26 16:10:51.149653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.150034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.150050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.548 qpair failed and we were unable to recover it. 00:28:11.548 [2024-04-26 16:10:51.150382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.150828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.150868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.548 qpair failed and we were unable to recover it. 00:28:11.548 [2024-04-26 16:10:51.151323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.151737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.151782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.548 qpair failed and we were unable to recover it. 00:28:11.548 [2024-04-26 16:10:51.152196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.152651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.152692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.548 qpair failed and we were unable to recover it. 00:28:11.548 [2024-04-26 16:10:51.153085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.153475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.153514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.548 qpair failed and we were unable to recover it. 
00:28:11.548 [2024-04-26 16:10:51.153955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.154403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.154446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.548 qpair failed and we were unable to recover it. 00:28:11.548 [2024-04-26 16:10:51.154815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.155282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.155324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.548 qpair failed and we were unable to recover it. 00:28:11.548 [2024-04-26 16:10:51.155754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.156098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.156139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.548 qpair failed and we were unable to recover it. 00:28:11.548 [2024-04-26 16:10:51.156531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.156884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.156923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.548 qpair failed and we were unable to recover it. 00:28:11.548 [2024-04-26 16:10:51.157347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.157845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.157885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.548 qpair failed and we were unable to recover it. 00:28:11.548 [2024-04-26 16:10:51.158411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.158863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.548 [2024-04-26 16:10:51.158903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.159414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.159918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.159967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 
00:28:11.549 [2024-04-26 16:10:51.160394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.160735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.160774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.161253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.161645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.161685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.162232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.162654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.162693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.163097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.163440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.163480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.163936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.164286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.164329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.164822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.165227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.165269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.165634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.166111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.166154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 
00:28:11.549 [2024-04-26 16:10:51.166586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.167164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.167206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.167571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.167904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.167945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.168409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.168891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.168943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.169340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.169827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.169867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.170346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.170702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.170743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.171221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.171672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.171713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.172069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.172529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.172569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 
00:28:11.549 [2024-04-26 16:10:51.173086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.173426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.173466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.173853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.174229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.174244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.174622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.175016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.175056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.175533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.175929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.175969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.176421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.176801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.176842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.177261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.177614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.177666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.178054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.178483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.178525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 
00:28:11.549 [2024-04-26 16:10:51.178935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.179364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.179407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.179811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.180286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.180328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.180726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.181155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.181196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.181653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.182090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.182104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.549 qpair failed and we were unable to recover it. 00:28:11.549 [2024-04-26 16:10:51.182414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.182764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.549 [2024-04-26 16:10:51.182804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 00:28:11.550 [2024-04-26 16:10:51.183276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.183655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.183695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 00:28:11.550 [2024-04-26 16:10:51.184109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.184523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.184563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 
00:28:11.550 [2024-04-26 16:10:51.184992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.185378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.185420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 00:28:11.550 [2024-04-26 16:10:51.185826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.186237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.186285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 00:28:11.550 [2024-04-26 16:10:51.186682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.187177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.187219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 00:28:11.550 [2024-04-26 16:10:51.187729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.188133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.188174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 00:28:11.550 [2024-04-26 16:10:51.188608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.189007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.189047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 00:28:11.550 [2024-04-26 16:10:51.189464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.189873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.189914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 00:28:11.550 [2024-04-26 16:10:51.190482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.190956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.191006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 
00:28:11.550 [2024-04-26 16:10:51.191359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.191699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.191739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 00:28:11.550 [2024-04-26 16:10:51.192197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.192599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.192639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 00:28:11.550 [2024-04-26 16:10:51.193057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.193515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.193564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 00:28:11.550 [2024-04-26 16:10:51.194062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.194471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.194511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 00:28:11.550 [2024-04-26 16:10:51.194872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.195255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.195297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 00:28:11.550 [2024-04-26 16:10:51.195746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.196144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.196186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 00:28:11.550 [2024-04-26 16:10:51.196538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.197019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.197066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 
00:28:11.550 [2024-04-26 16:10:51.197423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.197775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.197815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 00:28:11.550 [2024-04-26 16:10:51.198272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.198664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.198704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 00:28:11.550 [2024-04-26 16:10:51.199126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.199603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.199643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 00:28:11.550 [2024-04-26 16:10:51.200143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.200541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.200581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 00:28:11.550 [2024-04-26 16:10:51.201135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.201614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.201655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 00:28:11.550 [2024-04-26 16:10:51.202151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.202604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.202644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 00:28:11.550 [2024-04-26 16:10:51.203065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.203505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.550 [2024-04-26 16:10:51.203545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.550 qpair failed and we were unable to recover it. 
00:28:11.550 [2024-04-26 16:10:51.204096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.551 [2024-04-26 16:10:51.204450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.551 [2024-04-26 16:10:51.204490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.551 qpair failed and we were unable to recover it. 00:28:11.551 [2024-04-26 16:10:51.204987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.551 [2024-04-26 16:10:51.205387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.551 [2024-04-26 16:10:51.205430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.551 qpair failed and we were unable to recover it. 00:28:11.551 [2024-04-26 16:10:51.205789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.551 [2024-04-26 16:10:51.206278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.551 [2024-04-26 16:10:51.206320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.551 qpair failed and we were unable to recover it. 00:28:11.551 [2024-04-26 16:10:51.206762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.551 [2024-04-26 16:10:51.207230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.551 [2024-04-26 16:10:51.207272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.551 qpair failed and we were unable to recover it. 00:28:11.551 [2024-04-26 16:10:51.207685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.551 [2024-04-26 16:10:51.208156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.551 [2024-04-26 16:10:51.208172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.551 qpair failed and we were unable to recover it. 00:28:11.551 [2024-04-26 16:10:51.208566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.551 [2024-04-26 16:10:51.208942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.551 [2024-04-26 16:10:51.208983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.551 qpair failed and we were unable to recover it. 00:28:11.551 [2024-04-26 16:10:51.209388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.551 [2024-04-26 16:10:51.209779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.551 [2024-04-26 16:10:51.209795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.551 qpair failed and we were unable to recover it. 
00:28:11.551 [2024-04-26 16:10:51.210249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.551 [2024-04-26 16:10:51.210610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.551 [2024-04-26 16:10:51.210651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.551 qpair failed and we were unable to recover it. 00:28:11.551 [2024-04-26 16:10:51.211050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.551 [2024-04-26 16:10:51.211461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.551 [2024-04-26 16:10:51.211502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.551 qpair failed and we were unable to recover it. 00:28:11.551 [2024-04-26 16:10:51.211924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.816 [2024-04-26 16:10:51.212392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.816 [2024-04-26 16:10:51.212409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.816 qpair failed and we were unable to recover it. 00:28:11.816 [2024-04-26 16:10:51.212781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.816 [2024-04-26 16:10:51.213154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.816 [2024-04-26 16:10:51.213170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.213528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.213963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.213979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.214334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.214637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.214678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.215100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.215410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.215451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 
00:28:11.817 [2024-04-26 16:10:51.215914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.216319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.216362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.216850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.217254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.217296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.217716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.218135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.218176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.218588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.219049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.219110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.219521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.219986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.220027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.220446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.220838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.220878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.221336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.221692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.221732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 
00:28:11.817 [2024-04-26 16:10:51.222180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.222487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.222528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.222994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.223413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.223455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.223944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.224410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.224452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.224884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.225332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.225374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.225838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.226298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.226340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.226681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.227100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.227143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.227684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.228134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.228151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 
00:28:11.817 [2024-04-26 16:10:51.228556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.228901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.228915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.229317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.229702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.229717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.230210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.230690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.230731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.231234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.231570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.231587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.231965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.232388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.232404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.232808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.233238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.233254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.233629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.234029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.234045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 
00:28:11.817 [2024-04-26 16:10:51.234375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.234728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.234743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.235115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.235516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.235532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.817 qpair failed and we were unable to recover it. 00:28:11.817 [2024-04-26 16:10:51.235831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.817 [2024-04-26 16:10:51.236270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.236287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.818 qpair failed and we were unable to recover it. 00:28:11.818 [2024-04-26 16:10:51.236571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.236998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.237014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.818 qpair failed and we were unable to recover it. 00:28:11.818 [2024-04-26 16:10:51.237373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.237723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.237740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.818 qpair failed and we were unable to recover it. 00:28:11.818 [2024-04-26 16:10:51.238158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.238540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.238555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.818 qpair failed and we were unable to recover it. 00:28:11.818 [2024-04-26 16:10:51.238963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.239343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.239361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.818 qpair failed and we were unable to recover it. 
00:28:11.818 [2024-04-26 16:10:51.239741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.240035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.240056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.818 qpair failed and we were unable to recover it. 00:28:11.818 [2024-04-26 16:10:51.240473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.240770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.240785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.818 qpair failed and we were unable to recover it. 00:28:11.818 [2024-04-26 16:10:51.241181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.241538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.241553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.818 qpair failed and we were unable to recover it. 00:28:11.818 [2024-04-26 16:10:51.241860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.242221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.242237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.818 qpair failed and we were unable to recover it. 00:28:11.818 [2024-04-26 16:10:51.242574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.242859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.242874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.818 qpair failed and we were unable to recover it. 00:28:11.818 [2024-04-26 16:10:51.243218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.243562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.243578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.818 qpair failed and we were unable to recover it. 00:28:11.818 [2024-04-26 16:10:51.244001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.244365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.244381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.818 qpair failed and we were unable to recover it. 
00:28:11.818 [2024-04-26 16:10:51.244687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.245092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.245108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.818 qpair failed and we were unable to recover it. 00:28:11.818 [2024-04-26 16:10:51.245407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.245765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.245780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.818 qpair failed and we were unable to recover it. 00:28:11.818 [2024-04-26 16:10:51.246277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.246629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.246660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.818 qpair failed and we were unable to recover it. 00:28:11.818 [2024-04-26 16:10:51.247127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.247441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.247463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.818 qpair failed and we were unable to recover it. 00:28:11.818 [2024-04-26 16:10:51.247795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.248263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.248284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.818 qpair failed and we were unable to recover it. 00:28:11.818 [2024-04-26 16:10:51.248590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.249090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.249111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.818 qpair failed and we were unable to recover it. 00:28:11.818 [2024-04-26 16:10:51.249529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.249824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.249845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.818 qpair failed and we were unable to recover it. 
00:28:11.818 [2024-04-26 16:10:51.250280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.250592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.250614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.818 qpair failed and we were unable to recover it. 00:28:11.818 [2024-04-26 16:10:51.251161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.251650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.251671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.818 qpair failed and we were unable to recover it. 00:28:11.818 [2024-04-26 16:10:51.251973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.252402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.818 [2024-04-26 16:10:51.252424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.818 qpair failed and we were unable to recover it. 00:28:11.818 [2024-04-26 16:10:51.252895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.253285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.253306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.819 qpair failed and we were unable to recover it. 00:28:11.819 [2024-04-26 16:10:51.253619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.253927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.253947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.819 qpair failed and we were unable to recover it. 00:28:11.819 [2024-04-26 16:10:51.254404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.254854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.254874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.819 qpair failed and we were unable to recover it. 00:28:11.819 [2024-04-26 16:10:51.255316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.255621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.255641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.819 qpair failed and we were unable to recover it. 
00:28:11.819 [2024-04-26 16:10:51.256012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.256371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.256391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.819 qpair failed and we were unable to recover it. 00:28:11.819 [2024-04-26 16:10:51.256828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.257234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.257255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.819 qpair failed and we were unable to recover it. 00:28:11.819 [2024-04-26 16:10:51.257586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.258053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.258079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.819 qpair failed and we were unable to recover it. 00:28:11.819 [2024-04-26 16:10:51.258494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.258846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.258865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.819 qpair failed and we were unable to recover it. 00:28:11.819 [2024-04-26 16:10:51.259247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.259555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.259575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.819 qpair failed and we were unable to recover it. 00:28:11.819 [2024-04-26 16:10:51.259976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.260356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.260377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.819 qpair failed and we were unable to recover it. 00:28:11.819 [2024-04-26 16:10:51.260742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.261178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.261199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.819 qpair failed and we were unable to recover it. 
00:28:11.819 [2024-04-26 16:10:51.261511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.262036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.262056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.819 qpair failed and we were unable to recover it. 00:28:11.819 [2024-04-26 16:10:51.262503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.262964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.262985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.819 qpair failed and we were unable to recover it. 00:28:11.819 [2024-04-26 16:10:51.263346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.263707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.263727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.819 qpair failed and we were unable to recover it. 00:28:11.819 [2024-04-26 16:10:51.264184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.264503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.264522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.819 qpair failed and we were unable to recover it. 00:28:11.819 [2024-04-26 16:10:51.264822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.265263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.265284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.819 qpair failed and we were unable to recover it. 00:28:11.819 [2024-04-26 16:10:51.265720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.266089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.266110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.819 qpair failed and we were unable to recover it. 00:28:11.819 [2024-04-26 16:10:51.266546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.266916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.266935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.819 qpair failed and we were unable to recover it. 
00:28:11.819 [2024-04-26 16:10:51.267253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.267614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.267634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.819 qpair failed and we were unable to recover it. 00:28:11.819 [2024-04-26 16:10:51.268093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.268408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.268428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.819 qpair failed and we were unable to recover it. 00:28:11.819 [2024-04-26 16:10:51.268775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.269137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.269158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.819 qpair failed and we were unable to recover it. 00:28:11.819 [2024-04-26 16:10:51.269569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.269951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.819 [2024-04-26 16:10:51.269971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.819 qpair failed and we were unable to recover it. 00:28:11.820 [2024-04-26 16:10:51.270418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.270817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.270838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.820 qpair failed and we were unable to recover it. 00:28:11.820 [2024-04-26 16:10:51.271252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.271616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.271637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.820 qpair failed and we were unable to recover it. 00:28:11.820 [2024-04-26 16:10:51.272087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.272396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.272417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.820 qpair failed and we were unable to recover it. 
00:28:11.820 [2024-04-26 16:10:51.272729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.273143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.273163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.820 qpair failed and we were unable to recover it. 00:28:11.820 [2024-04-26 16:10:51.273603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.274032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.274052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.820 qpair failed and we were unable to recover it. 00:28:11.820 [2024-04-26 16:10:51.274467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.274910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.274930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.820 qpair failed and we were unable to recover it. 00:28:11.820 [2024-04-26 16:10:51.275302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.275658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.275679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.820 qpair failed and we were unable to recover it. 00:28:11.820 [2024-04-26 16:10:51.276048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.276365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.276386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.820 qpair failed and we were unable to recover it. 00:28:11.820 [2024-04-26 16:10:51.276694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.277151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.277172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.820 qpair failed and we were unable to recover it. 00:28:11.820 [2024-04-26 16:10:51.277585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.278051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.278077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.820 qpair failed and we were unable to recover it. 
00:28:11.820 [2024-04-26 16:10:51.278473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.278831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.278851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.820 qpair failed and we were unable to recover it. 00:28:11.820 [2024-04-26 16:10:51.279248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.279558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.279578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.820 qpair failed and we were unable to recover it. 00:28:11.820 [2024-04-26 16:10:51.279925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.280286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.280306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.820 qpair failed and we were unable to recover it. 00:28:11.820 [2024-04-26 16:10:51.280722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.281087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.281108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.820 qpair failed and we were unable to recover it. 00:28:11.820 [2024-04-26 16:10:51.281477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.281825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.281844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.820 qpair failed and we were unable to recover it. 00:28:11.820 [2024-04-26 16:10:51.282220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.282611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.282631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.820 qpair failed and we were unable to recover it. 00:28:11.820 [2024-04-26 16:10:51.283086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.283525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.283544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.820 qpair failed and we were unable to recover it. 
00:28:11.820 [2024-04-26 16:10:51.283937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.284387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.284408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.820 qpair failed and we were unable to recover it. 00:28:11.820 [2024-04-26 16:10:51.284824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.285189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.285210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.820 qpair failed and we were unable to recover it. 00:28:11.820 [2024-04-26 16:10:51.285505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.285979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.285999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.820 qpair failed and we were unable to recover it. 00:28:11.820 [2024-04-26 16:10:51.286439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.286767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.286791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.820 qpair failed and we were unable to recover it. 00:28:11.820 [2024-04-26 16:10:51.287101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.287528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.820 [2024-04-26 16:10:51.287548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.820 qpair failed and we were unable to recover it. 00:28:11.820 [2024-04-26 16:10:51.287924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.288381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.288402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.821 qpair failed and we were unable to recover it. 00:28:11.821 [2024-04-26 16:10:51.288786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.289162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.289183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.821 qpair failed and we were unable to recover it. 
00:28:11.821 [2024-04-26 16:10:51.289553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.289920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.289940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.821 qpair failed and we were unable to recover it. 00:28:11.821 [2024-04-26 16:10:51.290258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.290639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.290659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.821 qpair failed and we were unable to recover it. 00:28:11.821 [2024-04-26 16:10:51.291028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.291440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.291461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.821 qpair failed and we were unable to recover it. 00:28:11.821 [2024-04-26 16:10:51.291813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.292171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.292191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.821 qpair failed and we were unable to recover it. 00:28:11.821 [2024-04-26 16:10:51.292513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.292923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.292943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.821 qpair failed and we were unable to recover it. 00:28:11.821 [2024-04-26 16:10:51.293410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.293701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.293719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.821 qpair failed and we were unable to recover it. 00:28:11.821 [2024-04-26 16:10:51.294104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.294525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.294548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.821 qpair failed and we were unable to recover it. 
00:28:11.821 [2024-04-26 16:10:51.294924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.295351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.295372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.821 qpair failed and we were unable to recover it. 00:28:11.821 [2024-04-26 16:10:51.295778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.296077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.296097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.821 qpair failed and we were unable to recover it. 00:28:11.821 [2024-04-26 16:10:51.296466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.296827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.296847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.821 qpair failed and we were unable to recover it. 00:28:11.821 [2024-04-26 16:10:51.297208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.297565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.297585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.821 qpair failed and we were unable to recover it. 00:28:11.821 [2024-04-26 16:10:51.298082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.298495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.298515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.821 qpair failed and we were unable to recover it. 00:28:11.821 [2024-04-26 16:10:51.298913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.299367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.299387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.821 qpair failed and we were unable to recover it. 00:28:11.821 [2024-04-26 16:10:51.299854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.300222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.300241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.821 qpair failed and we were unable to recover it. 
00:28:11.821 [2024-04-26 16:10:51.300551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.300978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.300997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.821 qpair failed and we were unable to recover it. 00:28:11.821 [2024-04-26 16:10:51.301452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.301809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.301829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.821 qpair failed and we were unable to recover it. 00:28:11.821 [2024-04-26 16:10:51.302256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.302560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.302584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.821 qpair failed and we were unable to recover it. 00:28:11.821 [2024-04-26 16:10:51.302992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.821 [2024-04-26 16:10:51.303355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.303380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.822 qpair failed and we were unable to recover it. 00:28:11.822 [2024-04-26 16:10:51.303735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.304152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.304171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.822 qpair failed and we were unable to recover it. 00:28:11.822 [2024-04-26 16:10:51.304480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.304852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.304871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.822 qpair failed and we were unable to recover it. 00:28:11.822 [2024-04-26 16:10:51.305171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.305599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.305618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.822 qpair failed and we were unable to recover it. 
00:28:11.822 [2024-04-26 16:10:51.306129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.306426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.306446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.822 qpair failed and we were unable to recover it. 00:28:11.822 [2024-04-26 16:10:51.306871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.307170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.307191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.822 qpair failed and we were unable to recover it. 00:28:11.822 [2024-04-26 16:10:51.307494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.307846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.307865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.822 qpair failed and we were unable to recover it. 00:28:11.822 [2024-04-26 16:10:51.308325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.308695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.308714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.822 qpair failed and we were unable to recover it. 00:28:11.822 [2024-04-26 16:10:51.309154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.309554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.309573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.822 qpair failed and we were unable to recover it. 00:28:11.822 [2024-04-26 16:10:51.310003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.310344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.310368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.822 qpair failed and we were unable to recover it. 00:28:11.822 [2024-04-26 16:10:51.310721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.311089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.311110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.822 qpair failed and we were unable to recover it. 
00:28:11.822 [2024-04-26 16:10:51.311480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.311934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.311954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.822 qpair failed and we were unable to recover it. 00:28:11.822 [2024-04-26 16:10:51.312332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.312686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.312705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.822 qpair failed and we were unable to recover it. 00:28:11.822 [2024-04-26 16:10:51.313149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.313506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.313527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.822 qpair failed and we were unable to recover it. 00:28:11.822 [2024-04-26 16:10:51.313886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.314229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.314248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.822 qpair failed and we were unable to recover it. 00:28:11.822 [2024-04-26 16:10:51.314660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.822 [2024-04-26 16:10:51.315083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.315103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.823 qpair failed and we were unable to recover it. 00:28:11.823 [2024-04-26 16:10:51.315463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.315820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.315839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.823 qpair failed and we were unable to recover it. 00:28:11.823 [2024-04-26 16:10:51.316144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.316482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.316500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.823 qpair failed and we were unable to recover it. 
00:28:11.823 [2024-04-26 16:10:51.316946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.317495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.317515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.823 qpair failed and we were unable to recover it. 00:28:11.823 [2024-04-26 16:10:51.317928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.318386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.318415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.823 qpair failed and we were unable to recover it. 00:28:11.823 [2024-04-26 16:10:51.318780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.319229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.319249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.823 qpair failed and we were unable to recover it. 00:28:11.823 [2024-04-26 16:10:51.319571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.319924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.319943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.823 qpair failed and we were unable to recover it. 00:28:11.823 [2024-04-26 16:10:51.320245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.320599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.320619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.823 qpair failed and we were unable to recover it. 00:28:11.823 [2024-04-26 16:10:51.320920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.321218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.321238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.823 qpair failed and we were unable to recover it. 00:28:11.823 [2024-04-26 16:10:51.321643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.322154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.322174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.823 qpair failed and we were unable to recover it. 
00:28:11.823 [2024-04-26 16:10:51.322570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.323006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.323026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.823 qpair failed and we were unable to recover it. 00:28:11.823 [2024-04-26 16:10:51.323404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.323702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.323721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.823 qpair failed and we were unable to recover it. 00:28:11.823 [2024-04-26 16:10:51.324082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.324435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.324455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.823 qpair failed and we were unable to recover it. 00:28:11.823 [2024-04-26 16:10:51.324885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.325245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.325267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.823 qpair failed and we were unable to recover it. 00:28:11.823 [2024-04-26 16:10:51.325563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.325994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.326014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.823 qpair failed and we were unable to recover it. 00:28:11.823 [2024-04-26 16:10:51.326438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.326729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.326749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.823 qpair failed and we were unable to recover it. 00:28:11.823 [2024-04-26 16:10:51.327180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.327549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.327569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.823 qpair failed and we were unable to recover it. 
00:28:11.823 [2024-04-26 16:10:51.327988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.328385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.328405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.823 qpair failed and we were unable to recover it. 00:28:11.823 [2024-04-26 16:10:51.328811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.329222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.329243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.823 qpair failed and we were unable to recover it. 00:28:11.823 [2024-04-26 16:10:51.329680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.330156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.330176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.823 qpair failed and we were unable to recover it. 00:28:11.823 [2024-04-26 16:10:51.330537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.330826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.330845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.823 qpair failed and we were unable to recover it. 00:28:11.823 [2024-04-26 16:10:51.331203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.823 [2024-04-26 16:10:51.331508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.331528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.824 qpair failed and we were unable to recover it. 00:28:11.824 [2024-04-26 16:10:51.331980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.332361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.332381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.824 qpair failed and we were unable to recover it. 00:28:11.824 [2024-04-26 16:10:51.332732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.333336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.333356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.824 qpair failed and we were unable to recover it. 
00:28:11.824 [2024-04-26 16:10:51.333729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.334166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.334186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.824 qpair failed and we were unable to recover it. 00:28:11.824 [2024-04-26 16:10:51.334498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.334807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.334827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.824 qpair failed and we were unable to recover it. 00:28:11.824 [2024-04-26 16:10:51.335185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.335540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.335559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.824 qpair failed and we were unable to recover it. 00:28:11.824 [2024-04-26 16:10:51.335991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.336407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.336426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.824 qpair failed and we were unable to recover it. 00:28:11.824 [2024-04-26 16:10:51.336815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.337176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.337196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.824 qpair failed and we were unable to recover it. 00:28:11.824 [2024-04-26 16:10:51.337496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.337792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.337812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.824 qpair failed and we were unable to recover it. 00:28:11.824 [2024-04-26 16:10:51.338179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.338608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.338628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.824 qpair failed and we were unable to recover it. 
00:28:11.824 [2024-04-26 16:10:51.339093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.339390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.339410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.824 qpair failed and we were unable to recover it. 00:28:11.824 [2024-04-26 16:10:51.339764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.340188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.340222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.824 qpair failed and we were unable to recover it. 00:28:11.824 [2024-04-26 16:10:51.340508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.340803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.340823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.824 qpair failed and we were unable to recover it. 00:28:11.824 [2024-04-26 16:10:51.341313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.341694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.341714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.824 qpair failed and we were unable to recover it. 00:28:11.824 [2024-04-26 16:10:51.342027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.342366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.342387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.824 qpair failed and we were unable to recover it. 00:28:11.824 [2024-04-26 16:10:51.342738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.343103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.343123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.824 qpair failed and we were unable to recover it. 00:28:11.824 [2024-04-26 16:10:51.343523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.344516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.344556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.824 qpair failed and we were unable to recover it. 
00:28:11.824 [2024-04-26 16:10:51.345049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.345934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.345971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.824 qpair failed and we were unable to recover it. 00:28:11.824 [2024-04-26 16:10:51.346464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.347541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.347579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.824 qpair failed and we were unable to recover it. 00:28:11.824 [2024-04-26 16:10:51.348056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.349165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.824 [2024-04-26 16:10:51.349207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.824 qpair failed and we were unable to recover it. 00:28:11.825 [2024-04-26 16:10:51.349615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.350695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.350734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.825 qpair failed and we were unable to recover it. 00:28:11.825 [2024-04-26 16:10:51.351104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.352436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.352472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.825 qpair failed and we were unable to recover it. 00:28:11.825 [2024-04-26 16:10:51.352916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.353398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.353422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.825 qpair failed and we were unable to recover it. 00:28:11.825 [2024-04-26 16:10:51.353704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.354053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.354068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.825 qpair failed and we were unable to recover it. 
00:28:11.825 [2024-04-26 16:10:51.354451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.354790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.354805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.825 qpair failed and we were unable to recover it. 00:28:11.825 [2024-04-26 16:10:51.355144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.355538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.355579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.825 qpair failed and we were unable to recover it. 00:28:11.825 [2024-04-26 16:10:51.355975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.356416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.356459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.825 qpair failed and we were unable to recover it. 00:28:11.825 [2024-04-26 16:10:51.356858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.357248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.357290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.825 qpair failed and we were unable to recover it. 00:28:11.825 [2024-04-26 16:10:51.357666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.358013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.358027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.825 qpair failed and we were unable to recover it. 00:28:11.825 [2024-04-26 16:10:51.358336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.358663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.358678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.825 qpair failed and we were unable to recover it. 00:28:11.825 [2024-04-26 16:10:51.358971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.359311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.359327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.825 qpair failed and we were unable to recover it. 
00:28:11.825 [2024-04-26 16:10:51.359662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.359801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.359815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.825 qpair failed and we were unable to recover it. 00:28:11.825 [2024-04-26 16:10:51.360113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.360488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.360529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.825 qpair failed and we were unable to recover it. 00:28:11.825 [2024-04-26 16:10:51.360944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.361333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.361371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.825 qpair failed and we were unable to recover it. 00:28:11.825 [2024-04-26 16:10:51.361746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.362023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.362063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.825 qpair failed and we were unable to recover it. 00:28:11.825 [2024-04-26 16:10:51.363172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.363630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.363649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.825 qpair failed and we were unable to recover it. 00:28:11.825 [2024-04-26 16:10:51.363892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.364033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.364048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.825 qpair failed and we were unable to recover it. 00:28:11.825 [2024-04-26 16:10:51.364399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.364981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.365024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.825 qpair failed and we were unable to recover it. 
00:28:11.825 [2024-04-26 16:10:51.365383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.365646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.825 [2024-04-26 16:10:51.365662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.825 qpair failed and we were unable to recover it. 00:28:11.826 [2024-04-26 16:10:51.366185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.826 [2024-04-26 16:10:51.366521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.826 [2024-04-26 16:10:51.366545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.826 qpair failed and we were unable to recover it. 00:28:11.826 [2024-04-26 16:10:51.367019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.826 [2024-04-26 16:10:51.367287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.826 [2024-04-26 16:10:51.367303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.826 qpair failed and we were unable to recover it. 00:28:11.826 [2024-04-26 16:10:51.367435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.826 [2024-04-26 16:10:51.368093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.826 [2024-04-26 16:10:51.368122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.826 qpair failed and we were unable to recover it. 00:28:11.826 [2024-04-26 16:10:51.368465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.826 [2024-04-26 16:10:51.368884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.826 [2024-04-26 16:10:51.368924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.826 qpair failed and we were unable to recover it. 00:28:11.826 [2024-04-26 16:10:51.369319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.826 [2024-04-26 16:10:51.369699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.826 [2024-04-26 16:10:51.369738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.826 qpair failed and we were unable to recover it. 
00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Write completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Write completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Write completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Write completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Write completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Read completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 Write completed with error (sct=0, sc=8) 00:28:11.826 starting I/O failed 00:28:11.826 [2024-04-26 16:10:51.370576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:11.826 [2024-04-26 16:10:51.370944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.826 [2024-04-26 16:10:51.371290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.826 [2024-04-26 16:10:51.371317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.826 qpair failed and we were unable to recover it. 
00:28:11.826 [2024-04-26 16:10:51.371659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.826 [2024-04-26 16:10:51.372041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.826 [2024-04-26 16:10:51.372066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.826 qpair failed and we were unable to recover it. 00:28:11.826 [2024-04-26 16:10:51.372373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.826 [2024-04-26 16:10:51.372772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.826 [2024-04-26 16:10:51.372790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.826 qpair failed and we were unable to recover it. 00:28:11.826 [2024-04-26 16:10:51.373214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.826 [2024-04-26 16:10:51.373489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.826 [2024-04-26 16:10:51.373507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.826 qpair failed and we were unable to recover it. 00:28:11.826 [2024-04-26 16:10:51.373873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.826 [2024-04-26 16:10:51.374165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.826 [2024-04-26 16:10:51.374184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.826 qpair failed and we were unable to recover it. 00:28:11.826 [2024-04-26 16:10:51.374631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.826 [2024-04-26 16:10:51.375137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.826 [2024-04-26 16:10:51.375156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.826 qpair failed and we were unable to recover it. 00:28:11.826 [2024-04-26 16:10:51.375517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.375916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.375934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.827 qpair failed and we were unable to recover it. 00:28:11.827 [2024-04-26 16:10:51.376247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.376590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.376608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.827 qpair failed and we were unable to recover it. 
00:28:11.827 [2024-04-26 16:10:51.377009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.377462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.377480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.827 qpair failed and we were unable to recover it. 00:28:11.827 [2024-04-26 16:10:51.377728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.378012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.378029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.827 qpair failed and we were unable to recover it. 00:28:11.827 [2024-04-26 16:10:51.378483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.378765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.378782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.827 qpair failed and we were unable to recover it. 00:28:11.827 [2024-04-26 16:10:51.379007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.379286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.379306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.827 qpair failed and we were unable to recover it. 00:28:11.827 [2024-04-26 16:10:51.379740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.380090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.380109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.827 qpair failed and we were unable to recover it. 00:28:11.827 [2024-04-26 16:10:51.380458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.380751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.380769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.827 qpair failed and we were unable to recover it. 00:28:11.827 [2024-04-26 16:10:51.381126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.381400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.381418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:11.827 qpair failed and we were unable to recover it. 
00:28:11.827 [2024-04-26 16:10:51.381937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.382153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.382193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.827 qpair failed and we were unable to recover it. 00:28:11.827 [2024-04-26 16:10:51.382539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.382981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.383020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.827 qpair failed and we were unable to recover it. 00:28:11.827 [2024-04-26 16:10:51.383346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.383728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.383767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.827 qpair failed and we were unable to recover it. 00:28:11.827 [2024-04-26 16:10:51.384108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.384526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.384565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.827 qpair failed and we were unable to recover it. 00:28:11.827 [2024-04-26 16:10:51.385048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.385401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.385442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.827 qpair failed and we were unable to recover it. 00:28:11.827 [2024-04-26 16:10:51.385900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.386101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.386142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.827 qpair failed and we were unable to recover it. 00:28:11.827 [2024-04-26 16:10:51.386542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.386859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.386899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.827 qpair failed and we were unable to recover it. 
00:28:11.827 [2024-04-26 16:10:51.387213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.387546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.387585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.827 qpair failed and we were unable to recover it. 00:28:11.827 [2024-04-26 16:10:51.387904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.388485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.388526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.827 qpair failed and we were unable to recover it. 00:28:11.827 [2024-04-26 16:10:51.388943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.389194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.389234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:11.827 qpair failed and we were unable to recover it. 00:28:11.827 [2024-04-26 16:10:51.389667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.827 [2024-04-26 16:10:51.390060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.390114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.828 qpair failed and we were unable to recover it. 00:28:11.828 [2024-04-26 16:10:51.390479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.390900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.390939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.828 qpair failed and we were unable to recover it. 00:28:11.828 [2024-04-26 16:10:51.391323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.391803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.391842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.828 qpair failed and we were unable to recover it. 00:28:11.828 [2024-04-26 16:10:51.392247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.392576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.392614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.828 qpair failed and we were unable to recover it. 
00:28:11.828 [2024-04-26 16:10:51.392929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.393362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.393402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.828 qpair failed and we were unable to recover it. 00:28:11.828 [2024-04-26 16:10:51.393807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.394157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.394197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.828 qpair failed and we were unable to recover it. 00:28:11.828 [2024-04-26 16:10:51.394547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.394893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.394931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.828 qpair failed and we were unable to recover it. 00:28:11.828 [2024-04-26 16:10:51.395359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.396119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.396176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.828 qpair failed and we were unable to recover it. 00:28:11.828 [2024-04-26 16:10:51.396543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.396836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.396854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.828 qpair failed and we were unable to recover it. 00:28:11.828 [2024-04-26 16:10:51.397203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.397502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.397520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.828 qpair failed and we were unable to recover it. 00:28:11.828 [2024-04-26 16:10:51.397829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.398218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.398237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.828 qpair failed and we were unable to recover it. 
00:28:11.828 [2024-04-26 16:10:51.398545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.399144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.399194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.828 qpair failed and we were unable to recover it. 00:28:11.828 [2024-04-26 16:10:51.399592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.399994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.400032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.828 qpair failed and we were unable to recover it. 00:28:11.828 [2024-04-26 16:10:51.400438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.400800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.400838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.828 qpair failed and we were unable to recover it. 00:28:11.828 [2024-04-26 16:10:51.401258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.401647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.401685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.828 qpair failed and we were unable to recover it. 00:28:11.828 [2024-04-26 16:10:51.402103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.402655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.402694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.828 qpair failed and we were unable to recover it. 00:28:11.828 [2024-04-26 16:10:51.403183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.403611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.403649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.828 qpair failed and we were unable to recover it. 00:28:11.828 [2024-04-26 16:10:51.404131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.404521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.404558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.828 qpair failed and we were unable to recover it. 
00:28:11.828 [2024-04-26 16:10:51.404983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.405391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.405430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.828 qpair failed and we were unable to recover it. 00:28:11.828 [2024-04-26 16:10:51.405840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.406179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.406219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.828 qpair failed and we were unable to recover it. 00:28:11.828 [2024-04-26 16:10:51.406615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.407016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.407054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.828 qpair failed and we were unable to recover it. 00:28:11.828 [2024-04-26 16:10:51.407349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.407673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.407712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.828 qpair failed and we were unable to recover it. 00:28:11.828 [2024-04-26 16:10:51.408118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.828 [2024-04-26 16:10:51.408409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.408447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.829 qpair failed and we were unable to recover it. 00:28:11.829 [2024-04-26 16:10:51.408943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.409354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.409396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.829 qpair failed and we were unable to recover it. 00:28:11.829 [2024-04-26 16:10:51.409820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.410261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.410301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.829 qpair failed and we were unable to recover it. 
00:28:11.829 [2024-04-26 16:10:51.410714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.411045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.411094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.829 qpair failed and we were unable to recover it. 00:28:11.829 [2024-04-26 16:10:51.411499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.411840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.411879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.829 qpair failed and we were unable to recover it. 00:28:11.829 [2024-04-26 16:10:51.412414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.412813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.412830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.829 qpair failed and we were unable to recover it. 00:28:11.829 [2024-04-26 16:10:51.413286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.413659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.413697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.829 qpair failed and we were unable to recover it. 00:28:11.829 [2024-04-26 16:10:51.414224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.414619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.414657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.829 qpair failed and we were unable to recover it. 00:28:11.829 [2024-04-26 16:10:51.415105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.415501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.415539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.829 qpair failed and we were unable to recover it. 00:28:11.829 [2024-04-26 16:10:51.415982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.416380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.416419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.829 qpair failed and we were unable to recover it. 
00:28:11.829 [2024-04-26 16:10:51.416754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.417120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.417161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.829 qpair failed and we were unable to recover it. 00:28:11.829 [2024-04-26 16:10:51.417518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.418004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.418043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.829 qpair failed and we were unable to recover it. 00:28:11.829 [2024-04-26 16:10:51.418480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.418998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.419037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.829 qpair failed and we were unable to recover it. 00:28:11.829 [2024-04-26 16:10:51.419478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.419933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.419971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.829 qpair failed and we were unable to recover it. 00:28:11.829 [2024-04-26 16:10:51.420372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.420769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.420807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.829 qpair failed and we were unable to recover it. 00:28:11.829 [2024-04-26 16:10:51.421302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.421702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.421740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.829 qpair failed and we were unable to recover it. 00:28:11.829 [2024-04-26 16:10:51.421964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.829 [2024-04-26 16:10:51.422417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.422456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 
00:28:11.830 [2024-04-26 16:10:51.422852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.423233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.423251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 00:28:11.830 [2024-04-26 16:10:51.423442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.423842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.423882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 00:28:11.830 [2024-04-26 16:10:51.424249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.424570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.424609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 00:28:11.830 [2024-04-26 16:10:51.424940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.425376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.425418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 00:28:11.830 [2024-04-26 16:10:51.425802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.426199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.426217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 00:28:11.830 [2024-04-26 16:10:51.426531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.426898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.426936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 00:28:11.830 [2024-04-26 16:10:51.427263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.427574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.427592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 
00:28:11.830 [2024-04-26 16:10:51.427930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.428339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.428378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 00:28:11.830 [2024-04-26 16:10:51.428723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.429111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.429150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 00:28:11.830 [2024-04-26 16:10:51.429569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.429969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.430008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 00:28:11.830 [2024-04-26 16:10:51.430432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.430756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.430774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 00:28:11.830 [2024-04-26 16:10:51.431058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.431483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.431528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 00:28:11.830 [2024-04-26 16:10:51.431844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.432232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.432273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 00:28:11.830 [2024-04-26 16:10:51.432707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.433095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.433136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 
00:28:11.830 [2024-04-26 16:10:51.433478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.433878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.433916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 00:28:11.830 [2024-04-26 16:10:51.434259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.434662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.434701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 00:28:11.830 [2024-04-26 16:10:51.435101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.435488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.435530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 00:28:11.830 [2024-04-26 16:10:51.435883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.436180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.436198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 00:28:11.830 [2024-04-26 16:10:51.436574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.437053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.437104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 00:28:11.830 [2024-04-26 16:10:51.437430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.437764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.437802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 00:28:11.830 [2024-04-26 16:10:51.438327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.438612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.438630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 
00:28:11.830 [2024-04-26 16:10:51.439001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.439413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.439475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 00:28:11.830 [2024-04-26 16:10:51.439850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.440158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.440176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 00:28:11.830 [2024-04-26 16:10:51.440536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.440967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.830 [2024-04-26 16:10:51.441024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.830 qpair failed and we were unable to recover it. 00:28:11.831 [2024-04-26 16:10:51.441366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.441679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.441717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 00:28:11.831 [2024-04-26 16:10:51.442054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.442450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.442489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 00:28:11.831 [2024-04-26 16:10:51.442822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.444043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.444087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 00:28:11.831 [2024-04-26 16:10:51.444384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.444678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.444717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 
00:28:11.831 [2024-04-26 16:10:51.445162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.445545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.445562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 00:28:11.831 [2024-04-26 16:10:51.445845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.446268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.446306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 00:28:11.831 [2024-04-26 16:10:51.446722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.447112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.447151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 00:28:11.831 [2024-04-26 16:10:51.447492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.447864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.447910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 00:28:11.831 [2024-04-26 16:10:51.448255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.448644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.448682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 00:28:11.831 [2024-04-26 16:10:51.449105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.449590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.449628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 00:28:11.831 [2024-04-26 16:10:51.450068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.450419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.450458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 
00:28:11.831 [2024-04-26 16:10:51.450833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.451128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.451168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 00:28:11.831 [2024-04-26 16:10:51.451639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.452099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.452139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 00:28:11.831 [2024-04-26 16:10:51.452474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.452994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.453031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 00:28:11.831 [2024-04-26 16:10:51.453367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.453847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.453885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 00:28:11.831 [2024-04-26 16:10:51.454271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.454588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.454626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 00:28:11.831 [2024-04-26 16:10:51.455018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.455417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.455457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 00:28:11.831 [2024-04-26 16:10:51.455809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.456135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.456182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 
00:28:11.831 [2024-04-26 16:10:51.456518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.456832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.456849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 00:28:11.831 [2024-04-26 16:10:51.457237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.457618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.457656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 00:28:11.831 [2024-04-26 16:10:51.458001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.458317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.458357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 00:28:11.831 [2024-04-26 16:10:51.458796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.459178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.459217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 00:28:11.831 [2024-04-26 16:10:51.459551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.459937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.459975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 00:28:11.831 [2024-04-26 16:10:51.460415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.460911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.460949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 00:28:11.831 [2024-04-26 16:10:51.461332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.461731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.831 [2024-04-26 16:10:51.461748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:11.831 qpair failed and we were unable to recover it. 
00:28:12.102 [2024-04-26 16:10:51.564540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.564975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.564992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.102 qpair failed and we were unable to recover it. 00:28:12.102 [2024-04-26 16:10:51.565392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.565612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.565649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.102 qpair failed and we were unable to recover it. 00:28:12.102 [2024-04-26 16:10:51.566090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.566456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.566494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.102 qpair failed and we were unable to recover it. 00:28:12.102 [2024-04-26 16:10:51.566951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.567327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.567373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.102 qpair failed and we were unable to recover it. 00:28:12.102 [2024-04-26 16:10:51.567847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.568300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.568339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.102 qpair failed and we were unable to recover it. 00:28:12.102 [2024-04-26 16:10:51.568812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.569153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.569193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.102 qpair failed and we were unable to recover it. 00:28:12.102 [2024-04-26 16:10:51.569576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.570019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.570057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.102 qpair failed and we were unable to recover it. 
00:28:12.102 [2024-04-26 16:10:51.570541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.571001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.571040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.102 qpair failed and we were unable to recover it. 00:28:12.102 [2024-04-26 16:10:51.571427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.571803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.571841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.102 qpair failed and we were unable to recover it. 00:28:12.102 [2024-04-26 16:10:51.572227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.572530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.572567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.102 qpair failed and we were unable to recover it. 00:28:12.102 [2024-04-26 16:10:51.572985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.573348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.573387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.102 qpair failed and we were unable to recover it. 00:28:12.102 [2024-04-26 16:10:51.573785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.574242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.574286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.102 qpair failed and we were unable to recover it. 00:28:12.102 [2024-04-26 16:10:51.574694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.575124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.102 [2024-04-26 16:10:51.575163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.102 qpair failed and we were unable to recover it. 00:28:12.103 [2024-04-26 16:10:51.575558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.575980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.575997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.103 qpair failed and we were unable to recover it. 
00:28:12.103 [2024-04-26 16:10:51.576395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.576789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.576806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.103 qpair failed and we were unable to recover it. 00:28:12.103 [2024-04-26 16:10:51.577167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.577534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.577578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.103 qpair failed and we were unable to recover it. 00:28:12.103 [2024-04-26 16:10:51.577909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.578312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.578330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.103 qpair failed and we were unable to recover it. 00:28:12.103 [2024-04-26 16:10:51.578666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.578944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.578964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.103 qpair failed and we were unable to recover it. 00:28:12.103 [2024-04-26 16:10:51.579316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.579726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.579764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.103 qpair failed and we were unable to recover it. 00:28:12.103 [2024-04-26 16:10:51.580091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.580365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.580383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.103 qpair failed and we were unable to recover it. 00:28:12.103 [2024-04-26 16:10:51.580777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.581124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.581163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.103 qpair failed and we were unable to recover it. 
00:28:12.103 [2024-04-26 16:10:51.581508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.581884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.581922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.103 qpair failed and we were unable to recover it. 00:28:12.103 [2024-04-26 16:10:51.582327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.582638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.582677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.103 qpair failed and we were unable to recover it. 00:28:12.103 [2024-04-26 16:10:51.583054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.583518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.583557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.103 qpair failed and we were unable to recover it. 00:28:12.103 [2024-04-26 16:10:51.583980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.584351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.584390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.103 qpair failed and we were unable to recover it. 00:28:12.103 [2024-04-26 16:10:51.584861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.585291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.585336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.103 qpair failed and we were unable to recover it. 00:28:12.103 [2024-04-26 16:10:51.585767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.586183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.586213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.103 qpair failed and we were unable to recover it. 00:28:12.103 [2024-04-26 16:10:51.586497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.586868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.586906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.103 qpair failed and we were unable to recover it. 
00:28:12.103 [2024-04-26 16:10:51.587237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.587601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.587639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.103 qpair failed and we were unable to recover it. 00:28:12.103 [2024-04-26 16:10:51.587983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.588358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.588399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.103 qpair failed and we were unable to recover it. 00:28:12.103 [2024-04-26 16:10:51.588871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.589254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.589293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.103 qpair failed and we were unable to recover it. 00:28:12.103 [2024-04-26 16:10:51.589733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.590129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.590168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.103 qpair failed and we were unable to recover it. 00:28:12.103 [2024-04-26 16:10:51.590558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.590980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.103 [2024-04-26 16:10:51.590997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.103 qpair failed and we were unable to recover it. 00:28:12.103 [2024-04-26 16:10:51.591337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.591678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.591716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.104 [2024-04-26 16:10:51.591899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.592287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.592327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 
00:28:12.104 [2024-04-26 16:10:51.592814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.593319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.593358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.104 [2024-04-26 16:10:51.593800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.594029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.594067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.104 [2024-04-26 16:10:51.594549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.594918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.594936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.104 [2024-04-26 16:10:51.595307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.595731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.595769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.104 [2024-04-26 16:10:51.596108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.596499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.596538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.104 [2024-04-26 16:10:51.596932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.597390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.597430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.104 [2024-04-26 16:10:51.597829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.598277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.598296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 
00:28:12.104 [2024-04-26 16:10:51.598719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.599194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.599233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.104 [2024-04-26 16:10:51.599566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.599798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.599842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.104 [2024-04-26 16:10:51.600208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.600579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.600616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.104 [2024-04-26 16:10:51.601091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.601479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.601517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.104 [2024-04-26 16:10:51.601930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.602306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.602345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.104 [2024-04-26 16:10:51.602784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.603215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.603254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.104 [2024-04-26 16:10:51.603688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.604095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.604142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 
00:28:12.104 [2024-04-26 16:10:51.604486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.604779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.604817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.104 [2024-04-26 16:10:51.605196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.605627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.605664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.104 [2024-04-26 16:10:51.606040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.606518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.606556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.104 [2024-04-26 16:10:51.606933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.607386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.607426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.104 [2024-04-26 16:10:51.607931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.608363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.608402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.104 [2024-04-26 16:10:51.608863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.609260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.609299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.104 [2024-04-26 16:10:51.609741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.610172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.610211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 
00:28:12.104 [2024-04-26 16:10:51.610587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.610762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.610799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.104 [2024-04-26 16:10:51.611253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.611705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.611743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.104 [2024-04-26 16:10:51.612229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.612684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.612722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.104 [2024-04-26 16:10:51.613057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.613498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.104 [2024-04-26 16:10:51.613536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.104 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.613984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.614419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.614458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.614861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.615291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.615330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.615703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.616147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.616185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 
00:28:12.105 [2024-04-26 16:10:51.616591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.616906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.616944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.617387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.617769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.617807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.618158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.618571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.618588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.619026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.619471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.619511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.619952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.620329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.620347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.620774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.621091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.621140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.621497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.621848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.621887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 
00:28:12.105 [2024-04-26 16:10:51.622305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.622686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.622724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.623166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.623596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.623634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.623869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.624235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.624274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.624692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.625145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.625184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.625662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.626094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.626134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.626482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.626896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.626934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.627341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.627703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.627741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 
00:28:12.105 [2024-04-26 16:10:51.627998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.628395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.628435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.628899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.629348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.629388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.629827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.630276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.630316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.630755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.631136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.631174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.631522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.631953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.631991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.632370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.632813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.632852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.633282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.633605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.633622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 
00:28:12.105 [2024-04-26 16:10:51.634044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.634486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.634525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.635001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.635432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.635471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.105 qpair failed and we were unable to recover it. 00:28:12.105 [2024-04-26 16:10:51.635925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.636382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.105 [2024-04-26 16:10:51.636422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.106 qpair failed and we were unable to recover it. 00:28:12.106 [2024-04-26 16:10:51.636759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.637216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.637234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.106 qpair failed and we were unable to recover it. 00:28:12.106 [2024-04-26 16:10:51.637607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.637995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.638033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.106 qpair failed and we were unable to recover it. 00:28:12.106 [2024-04-26 16:10:51.638426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.638801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.638839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.106 qpair failed and we were unable to recover it. 00:28:12.106 [2024-04-26 16:10:51.639276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.639622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.639660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.106 qpair failed and we were unable to recover it. 
00:28:12.106 [2024-04-26 16:10:51.639845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.640204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.640221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.106 qpair failed and we were unable to recover it. 00:28:12.106 [2024-04-26 16:10:51.640549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.640916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.640954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.106 qpair failed and we were unable to recover it. 00:28:12.106 [2024-04-26 16:10:51.641351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.641788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.641826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.106 qpair failed and we were unable to recover it. 00:28:12.106 [2024-04-26 16:10:51.642230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.642635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.642674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.106 qpair failed and we were unable to recover it. 00:28:12.106 [2024-04-26 16:10:51.643087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.643665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.643703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.106 qpair failed and we were unable to recover it. 00:28:12.106 [2024-04-26 16:10:51.644095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.644492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.644510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.106 qpair failed and we were unable to recover it. 00:28:12.106 [2024-04-26 16:10:51.644774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.645173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.645212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.106 qpair failed and we were unable to recover it. 
00:28:12.106 [2024-04-26 16:10:51.645661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.646024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.646061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.106 qpair failed and we were unable to recover it. 00:28:12.106 [2024-04-26 16:10:51.646484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.646860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.646898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.106 qpair failed and we were unable to recover it. 00:28:12.106 [2024-04-26 16:10:51.647355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.647729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.647767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.106 qpair failed and we were unable to recover it. 00:28:12.106 [2024-04-26 16:10:51.648106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.648549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.648587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.106 qpair failed and we were unable to recover it. 00:28:12.106 [2024-04-26 16:10:51.648960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.649284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.649324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.106 qpair failed and we were unable to recover it. 00:28:12.106 [2024-04-26 16:10:51.649718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.650089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.650129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.106 qpair failed and we were unable to recover it. 00:28:12.106 [2024-04-26 16:10:51.650514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.650800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.106 [2024-04-26 16:10:51.650817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.106 qpair failed and we were unable to recover it. 
00:28:12.106 [2024-04-26 16:10:51.651165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.106 [2024-04-26 16:10:51.651529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.106 [2024-04-26 16:10:51.651567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420
00:28:12.106 qpair failed and we were unable to recover it.
[... the same failure sequence (two posix_sock_create "connect() failed, errno = 111" errors, an nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it.") repeats continuously from 2024-04-26 16:10:51.651 through 16:10:51.774 ...]
00:28:12.112 [2024-04-26 16:10:51.773819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.112 [2024-04-26 16:10:51.774110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.112 [2024-04-26 16:10:51.774128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420
00:28:12.112 qpair failed and we were unable to recover it.
00:28:12.112 [2024-04-26 16:10:51.774471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.374 [2024-04-26 16:10:51.774798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.374 [2024-04-26 16:10:51.774816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.374 qpair failed and we were unable to recover it. 00:28:12.374 [2024-04-26 16:10:51.775159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.374 [2024-04-26 16:10:51.775557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.374 [2024-04-26 16:10:51.775574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.374 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.775842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.776123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.776140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.776473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.776742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.776759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.777190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.777531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.777549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.777951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.778344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.778361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.778713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.779145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.779163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 
00:28:12.375 [2024-04-26 16:10:51.779582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.779940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.779957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.780353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.780720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.780738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.781174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.781536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.781553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.781895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.782252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.782270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.782638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.783048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.783065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.783411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.783751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.783768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.784131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.784428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.784445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 
00:28:12.375 [2024-04-26 16:10:51.784791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.785117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.785135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.785418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.785611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.785628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.785984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.786262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.786281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.786555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.786897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.786914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.787272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.787683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.787700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.787968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.788224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.788241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.788663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.789030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.789047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 
00:28:12.375 [2024-04-26 16:10:51.789409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.789741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.789758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.790094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.790462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.790483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.790834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.791252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.791276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.791625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.792014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.792031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.792393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.792738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.792755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.793171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.793544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.793562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 00:28:12.375 [2024-04-26 16:10:51.793905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.794106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.794124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.375 qpair failed and we were unable to recover it. 
00:28:12.375 [2024-04-26 16:10:51.794512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.794860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.375 [2024-04-26 16:10:51.794876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.795023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.795292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.795309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.795670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.796057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.796078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.796497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.796900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.796918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.797319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.797656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.797674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.798117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.798474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.798491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.798782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.799130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.799147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 
00:28:12.376 [2024-04-26 16:10:51.799517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.799908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.799925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.800288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.800680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.800702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.801120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.801394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.801411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.801756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.802084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.802102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.802443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.802723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.802740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.803087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.803359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.803376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.803719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.803909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.803926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 
00:28:12.376 [2024-04-26 16:10:51.804326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.804743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.804775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.805131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.805531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.805549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.805826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.806192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.806215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.806635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.806985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.807014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.807374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.807769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.807790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.808121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.808537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.808555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.808958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.809300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.809326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 
00:28:12.376 [2024-04-26 16:10:51.809714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.809985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.810012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.810470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.810843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.810862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.811228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.811642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.811659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.812057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.812485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.812511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.812716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.813018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.813068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.376 [2024-04-26 16:10:51.813478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.813897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.376 [2024-04-26 16:10:51.813936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.376 qpair failed and we were unable to recover it. 00:28:12.377 [2024-04-26 16:10:51.814353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.814521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.814559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 
00:28:12.377 [2024-04-26 16:10:51.815028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.815443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.815489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 00:28:12.377 [2024-04-26 16:10:51.815951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.816403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.816442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 00:28:12.377 [2024-04-26 16:10:51.816888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.817160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.817199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 00:28:12.377 [2024-04-26 16:10:51.817568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.817931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.817968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 00:28:12.377 [2024-04-26 16:10:51.818352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.818736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.818775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 00:28:12.377 [2024-04-26 16:10:51.819233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.819611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.819649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 00:28:12.377 [2024-04-26 16:10:51.820116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.820484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.820521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 
00:28:12.377 [2024-04-26 16:10:51.820960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.821339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.821385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 00:28:12.377 [2024-04-26 16:10:51.821778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.822203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.822243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 00:28:12.377 [2024-04-26 16:10:51.822697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.823024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.823061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 00:28:12.377 [2024-04-26 16:10:51.823519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.823893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.823936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 00:28:12.377 [2024-04-26 16:10:51.824324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.824771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.824788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 00:28:12.377 [2024-04-26 16:10:51.825190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.825569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.825608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 00:28:12.377 [2024-04-26 16:10:51.826048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.826456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.826495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 
00:28:12.377 [2024-04-26 16:10:51.826945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.827309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.827350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 00:28:12.377 [2024-04-26 16:10:51.827714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.828106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.828146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 00:28:12.377 [2024-04-26 16:10:51.828534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.828979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.828996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 00:28:12.377 [2024-04-26 16:10:51.829340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.829823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.829861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 00:28:12.377 [2024-04-26 16:10:51.830306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.830763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.830801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 00:28:12.377 [2024-04-26 16:10:51.831268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.831723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.831761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 00:28:12.377 [2024-04-26 16:10:51.832150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.832552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.832596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 
00:28:12.377 [2024-04-26 16:10:51.832998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.833237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.833277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 00:28:12.377 [2024-04-26 16:10:51.833668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.834041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.834087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 00:28:12.377 [2024-04-26 16:10:51.834587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.834826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.834843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 00:28:12.377 [2024-04-26 16:10:51.835125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.835456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.377 [2024-04-26 16:10:51.835495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.377 qpair failed and we were unable to recover it. 00:28:12.378 [2024-04-26 16:10:51.835829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.836209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.836249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.378 qpair failed and we were unable to recover it. 00:28:12.378 [2024-04-26 16:10:51.836667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.837107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.837148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.378 qpair failed and we were unable to recover it. 00:28:12.378 [2024-04-26 16:10:51.837639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.838099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.838139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.378 qpair failed and we were unable to recover it. 
00:28:12.378 [2024-04-26 16:10:51.838601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.839062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.839113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.378 qpair failed and we were unable to recover it. 00:28:12.378 [2024-04-26 16:10:51.839433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.839815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.839852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.378 qpair failed and we were unable to recover it. 00:28:12.378 [2024-04-26 16:10:51.840316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.840711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.840749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.378 qpair failed and we were unable to recover it. 00:28:12.378 [2024-04-26 16:10:51.841152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.841495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.841532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.378 qpair failed and we were unable to recover it. 00:28:12.378 [2024-04-26 16:10:51.841924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.842296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.842335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.378 qpair failed and we were unable to recover it. 00:28:12.378 [2024-04-26 16:10:51.842811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.843258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.843296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.378 qpair failed and we were unable to recover it. 00:28:12.378 [2024-04-26 16:10:51.843698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.844128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.844167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.378 qpair failed and we were unable to recover it. 
00:28:12.378 [2024-04-26 16:10:51.844574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.845046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.845104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.378 qpair failed and we were unable to recover it. 00:28:12.378 [2024-04-26 16:10:51.845569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.845933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.845971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.378 qpair failed and we were unable to recover it. 00:28:12.378 [2024-04-26 16:10:51.846435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.846824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.846863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.378 qpair failed and we were unable to recover it. 00:28:12.378 [2024-04-26 16:10:51.847208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.847589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.847627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.378 qpair failed and we were unable to recover it. 00:28:12.378 [2024-04-26 16:10:51.848011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.848383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.848424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.378 qpair failed and we were unable to recover it. 00:28:12.378 [2024-04-26 16:10:51.848818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.849214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.849254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.378 qpair failed and we were unable to recover it. 00:28:12.378 [2024-04-26 16:10:51.849670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.850127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.378 [2024-04-26 16:10:51.850166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.378 qpair failed and we were unable to recover it. 
00:28:12.378 [2024-04-26 16:10:51.850494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.378 [2024-04-26 16:10:51.850934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.378 [2024-04-26 16:10:51.850971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420
00:28:12.378 qpair failed and we were unable to recover it.
[... the same four-line sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats for every subsequent connection attempt from 16:10:51.851365 through 16:10:51.978157, with only the microsecond timestamps changing ...]
00:28:12.384 [2024-04-26 16:10:51.978615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.384 [2024-04-26 16:10:51.978999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.384 [2024-04-26 16:10:51.979037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420
00:28:12.384 qpair failed and we were unable to recover it.
00:28:12.384 [2024-04-26 16:10:51.979529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.384 [2024-04-26 16:10:51.979904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.384 [2024-04-26 16:10:51.979941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.384 qpair failed and we were unable to recover it. 00:28:12.384 [2024-04-26 16:10:51.980354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.384 [2024-04-26 16:10:51.980729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.384 [2024-04-26 16:10:51.980767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.384 qpair failed and we were unable to recover it. 00:28:12.384 [2024-04-26 16:10:51.981237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.384 [2024-04-26 16:10:51.981667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.384 [2024-04-26 16:10:51.981705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.384 qpair failed and we were unable to recover it. 00:28:12.384 [2024-04-26 16:10:51.981998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.384 [2024-04-26 16:10:51.982339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.384 [2024-04-26 16:10:51.982357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.384 qpair failed and we were unable to recover it. 00:28:12.384 [2024-04-26 16:10:51.982619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.384 [2024-04-26 16:10:51.983011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.384 [2024-04-26 16:10:51.983028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.384 qpair failed and we were unable to recover it. 00:28:12.384 [2024-04-26 16:10:51.983320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.384 [2024-04-26 16:10:51.983674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.384 [2024-04-26 16:10:51.983699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.384 qpair failed and we were unable to recover it. 00:28:12.384 [2024-04-26 16:10:51.984065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.384 [2024-04-26 16:10:51.984418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.984439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 
00:28:12.385 [2024-04-26 16:10:51.984884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.985231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.985250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:51.985626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.985973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.986003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:51.986363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.986796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.986815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:51.987112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.987453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.987479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:51.987835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.988206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.988234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:51.988654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.988951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.988971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:51.989396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.989797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.989814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 
00:28:12.385 [2024-04-26 16:10:51.990180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.990480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.990499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:51.990843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.991232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.991250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:51.991598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.992011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.992028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:51.992244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.992659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.992678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:51.993113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.993465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.993481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:51.993889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.994292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.994307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:51.994645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.995064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.995082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 
00:28:12.385 [2024-04-26 16:10:51.995402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.995731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.995743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:51.996084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.996397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.996410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:51.996728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.997062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.997078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:51.997511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.997840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.997853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:51.998207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.998588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.998600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:51.998866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.999183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.999196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:51.999578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.999978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:51.999991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 
00:28:12.385 [2024-04-26 16:10:52.000356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:52.000631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:52.000644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:52.001048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:52.001431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:52.001444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:52.001794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:52.002123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:52.002136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:52.002457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:52.002813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:52.002825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:52.003106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:52.003538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.385 [2024-04-26 16:10:52.003550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.385 qpair failed and we were unable to recover it. 00:28:12.385 [2024-04-26 16:10:52.003947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.004366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.004379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.386 [2024-04-26 16:10:52.004784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.005127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.005141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 
00:28:12.386 [2024-04-26 16:10:52.005468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.005850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.005863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.386 [2024-04-26 16:10:52.006249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.006690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.006712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.386 [2024-04-26 16:10:52.007499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.007951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.007976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.386 [2024-04-26 16:10:52.008436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.008795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.008816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.386 [2024-04-26 16:10:52.009180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.009531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.009551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000030040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.386 [2024-04-26 16:10:52.009834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.010095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.010108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.386 [2024-04-26 16:10:52.010399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.010738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.010751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 
00:28:12.386 [2024-04-26 16:10:52.011134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.011486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.011499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.386 [2024-04-26 16:10:52.011857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.012172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.012186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.386 [2024-04-26 16:10:52.012607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.013057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.013078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.386 [2024-04-26 16:10:52.013463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.013816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.013829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.386 [2024-04-26 16:10:52.014208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.014606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.014625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.386 [2024-04-26 16:10:52.015002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.015366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.015385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.386 [2024-04-26 16:10:52.015793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.016139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.016159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 
00:28:12.386 [2024-04-26 16:10:52.016503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.016764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.016782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.386 [2024-04-26 16:10:52.017124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.017538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.017556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.386 [2024-04-26 16:10:52.017911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.018251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.018269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.386 [2024-04-26 16:10:52.018615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.019032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.019050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.386 [2024-04-26 16:10:52.019451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.019842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.019860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.386 [2024-04-26 16:10:52.020212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.020555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.020573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.386 [2024-04-26 16:10:52.021000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.021277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.021295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 
00:28:12.386 [2024-04-26 16:10:52.021639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.021936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.021954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.386 [2024-04-26 16:10:52.022377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.022721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.022739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.386 [2024-04-26 16:10:52.023152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.023430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.023447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.386 [2024-04-26 16:10:52.023869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.024263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.386 [2024-04-26 16:10:52.024281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.386 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.024622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.024945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.024963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.025261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.025622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.025640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.026006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.026344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.026363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 
00:28:12.387 [2024-04-26 16:10:52.026768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.027096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.027115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.027399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.027656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.027673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.027884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.028171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.028190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.028621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.028955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.028972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.029337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.029746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.029763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.030101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.030438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.030455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.030724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.031165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.031184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 
00:28:12.387 [2024-04-26 16:10:52.031516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.031842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.031860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.032264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.032627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.032645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.033073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.033409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.033427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.033835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.034161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.034179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.034534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.034928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.034946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.035295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.035691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.035708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.036108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.036524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.036542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 
00:28:12.387 [2024-04-26 16:10:52.036804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.037132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.037150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.037556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.037828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.037846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.038125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.038542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.038560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.038837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.039204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.039222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.039514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.039859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.039876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.040252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.040668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.040686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.041049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.041359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.041386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 
00:28:12.387 [2024-04-26 16:10:52.041795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.042213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.042269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.042697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.043051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.043099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.387 qpair failed and we were unable to recover it. 00:28:12.387 [2024-04-26 16:10:52.043477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.043866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.387 [2024-04-26 16:10:52.043907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.388 qpair failed and we were unable to recover it. 00:28:12.388 [2024-04-26 16:10:52.044308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.388 [2024-04-26 16:10:52.044715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.388 [2024-04-26 16:10:52.044755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.388 qpair failed and we were unable to recover it. 00:28:12.388 [2024-04-26 16:10:52.045235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.388 [2024-04-26 16:10:52.045667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.388 [2024-04-26 16:10:52.045705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.388 qpair failed and we were unable to recover it. 00:28:12.388 [2024-04-26 16:10:52.046016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.388 [2024-04-26 16:10:52.046429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.388 [2024-04-26 16:10:52.046468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.388 qpair failed and we were unable to recover it. 00:28:12.388 [2024-04-26 16:10:52.046856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.388 [2024-04-26 16:10:52.047312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.388 [2024-04-26 16:10:52.047331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.388 qpair failed and we were unable to recover it. 
00:28:12.388 [2024-04-26 16:10:52.047638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.388 [2024-04-26 16:10:52.047979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.388 [2024-04-26 16:10:52.048018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.388 qpair failed and we were unable to recover it. 00:28:12.388 [2024-04-26 16:10:52.048892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.388 [2024-04-26 16:10:52.049299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.388 [2024-04-26 16:10:52.049320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.388 qpair failed and we were unable to recover it. 00:28:12.388 [2024-04-26 16:10:52.049742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.388 [2024-04-26 16:10:52.050158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.388 [2024-04-26 16:10:52.050178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.388 qpair failed and we were unable to recover it. 00:28:12.388 [2024-04-26 16:10:52.050522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.388 [2024-04-26 16:10:52.050849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.388 [2024-04-26 16:10:52.050867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.388 qpair failed and we were unable to recover it. 00:28:12.388 [2024-04-26 16:10:52.051209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.388 [2024-04-26 16:10:52.051617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.388 [2024-04-26 16:10:52.051635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.388 qpair failed and we were unable to recover it. 00:28:12.388 [2024-04-26 16:10:52.051983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.388 [2024-04-26 16:10:52.052320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.388 [2024-04-26 16:10:52.052342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.388 qpair failed and we were unable to recover it. 00:28:12.388 [2024-04-26 16:10:52.052692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.388 [2024-04-26 16:10:52.053052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.388 [2024-04-26 16:10:52.053083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.388 qpair failed and we were unable to recover it. 
00:28:12.653 [2024-04-26 16:10:52.053497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.653 [2024-04-26 16:10:52.053785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.653 [2024-04-26 16:10:52.053803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.653 qpair failed and we were unable to recover it. 00:28:12.653 [2024-04-26 16:10:52.054171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.653 [2024-04-26 16:10:52.054457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.653 [2024-04-26 16:10:52.054477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.653 qpair failed and we were unable to recover it. 00:28:12.653 [2024-04-26 16:10:52.054766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.653 [2024-04-26 16:10:52.055135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.653 [2024-04-26 16:10:52.055170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.653 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.055529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.055868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.055886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.056283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.056557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.056575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.056922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.057276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.057317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.057772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.058123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.058166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 
00:28:12.654 [2024-04-26 16:10:52.058553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.058873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.058911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.059379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.059752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.059773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.060107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.060475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.060514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.060955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.061326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.061366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.061833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.062210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.062252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.062584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.062965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.063003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.063231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.063514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.063553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 
00:28:12.654 [2024-04-26 16:10:52.063928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.064357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.064397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.064783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.065161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.065201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.065593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.065959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.065998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.066458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.066778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.066817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.067287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.067679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.067725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.068197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.068585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.068624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.069094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.069526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.069566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 
00:28:12.654 [2024-04-26 16:10:52.069950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.070343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.070384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.070798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.071250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.071292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.071730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.072113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.072161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.072479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.072884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.072903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.073193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.073588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.073606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.073950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.074228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.074247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.074586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.074939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.074956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 
00:28:12.654 [2024-04-26 16:10:52.075353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.075698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.075719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.076136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.076461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.076500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.076959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.077331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.077372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.077788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.078177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.078218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.078548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.078886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.078925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.079256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.079662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.079701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.080142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.080428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.080446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 
00:28:12.654 [2024-04-26 16:10:52.080846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.081205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.081246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.081712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.082112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.082153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.082606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.083048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.083108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.083505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.083903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.083942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.084317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.084654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.084673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.085114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.085428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.085467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.085874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.086257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.086299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 
00:28:12.654 [2024-04-26 16:10:52.086685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.086981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.086999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.087267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.087636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.087654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.088052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.088276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.088318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.088806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.089135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.089177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.089571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.090023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.090062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.090470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.090744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.090762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 00:28:12.654 [2024-04-26 16:10:52.091097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.091394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.654 [2024-04-26 16:10:52.091433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.654 qpair failed and we were unable to recover it. 
00:28:12.654 [2024-04-26 16:10:52.091885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.092342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.092383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.092716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.093099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.093141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.093554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.093920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.093958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.094391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.094649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.094688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.095016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.095504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.095546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.095932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.096334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.096376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.096787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.097092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.097131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 
00:28:12.655 [2024-04-26 16:10:52.097523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.097917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.097955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.098428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.098814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.098852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.099239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.099627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.099666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.100136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.100551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.100591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.101055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.101525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.101565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.101888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.102199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.102239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.102704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.103135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.103174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 
00:28:12.655 [2024-04-26 16:10:52.103489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.103758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.103776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.104134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.104587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.104626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.105041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.105426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.105465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.105762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.106066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.106120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.106623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.106999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.107037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.107386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.107791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.107829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.108079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.108379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.108418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 
00:28:12.655 [2024-04-26 16:10:52.108800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.109120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.109160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.109627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.110090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.110130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.110523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.110984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.111022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.111421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.111880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.111919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.112349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.112781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.112885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.113268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.113698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.113737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.114203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.114577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.114594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 
00:28:12.655 [2024-04-26 16:10:52.114879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.115229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.115268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.115709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.116085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.116125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.116588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.116977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.116995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.117415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.117819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.117858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.118260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.118706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.118723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.119148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.119464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.119503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.119968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.120447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.120487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 
00:28:12.655 [2024-04-26 16:10:52.120873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.121296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.121315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.121595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.122016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.122054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.122521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.122980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.123018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.123433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.123861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.123899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.124291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.124724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.124762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.125096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.125473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.125512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.125910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.126304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.126322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 
00:28:12.655 [2024-04-26 16:10:52.126611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.127029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.127067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.127542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.127856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.127895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.655 qpair failed and we were unable to recover it. 00:28:12.655 [2024-04-26 16:10:52.128340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.655 [2024-04-26 16:10:52.128799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.128838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.129226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.129537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.129555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.129977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.130308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.130348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.130753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.131128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.131167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.131513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.131846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.131864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 
00:28:12.656 [2024-04-26 16:10:52.132208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.132648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.132686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.133108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.133496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.133535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.133934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.134227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.134267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.134750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.135204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.135244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.135697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.136151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.136192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.136528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.136957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.136995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.137409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.137775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.137814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 
00:28:12.656 [2024-04-26 16:10:52.138257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.138657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.138696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.139090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.139543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.139561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.139957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.140302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.140342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.140784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.141108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.141147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.141625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.142089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.142128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.142511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.142875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.142913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.143290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.143743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.143782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 
00:28:12.656 [2024-04-26 16:10:52.144250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.144643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.144682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.145177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.145612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.145651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.146113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.146537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.146555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.146980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.147354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.147395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.147889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.148353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.148394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.148888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.149365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.149405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.149779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.150121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.150162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 
00:28:12.656 [2024-04-26 16:10:52.150557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.150997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.151035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.151431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.151805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.151844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.152314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.152751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.152768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.153116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.153506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.153544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.153947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.154314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.154354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.154761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.155190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.155230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.155572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.155972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.156011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 
00:28:12.656 [2024-04-26 16:10:52.156427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.156827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.156866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.157331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.157722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.157761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.158091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.158524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.158563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.158985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.159358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.159399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.656 [2024-04-26 16:10:52.159772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.160239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.656 [2024-04-26 16:10:52.160296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.656 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.160598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.160892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.160936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.161258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.161639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.161678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 
00:28:12.657 [2024-04-26 16:10:52.161995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.162348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.162395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.162636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.163023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.163061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.163567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.164055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.164103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.164564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.165007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.165045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.165507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.165946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.165985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.166222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.166571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.166609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.167008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.167481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.167521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 
00:28:12.657 [2024-04-26 16:10:52.167926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.168302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.168343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.168734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.169075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.169094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.169512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.169887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.169926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.170335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.170650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.170668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.171076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.171432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.171450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.171825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.172198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.172238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.172698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.173102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.173142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 
00:28:12.657 [2024-04-26 16:10:52.173544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.174005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.174043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.174517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.174899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.174938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.175436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.175819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.175859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.176246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.176698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.176737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.177144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.177609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.177647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.178030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.178464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.178503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.178972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.179347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.179387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 
00:28:12.657 [2024-04-26 16:10:52.179803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.180105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.180144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.180528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.180928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.180967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.181371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.181691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.181730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.182111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.182542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.182581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.182984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.183414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.183433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.183709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.184100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.184120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.184396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.184778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.184816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 
00:28:12.657 [2024-04-26 16:10:52.185292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.185745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.185785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.186227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.186704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.186743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.187118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.187513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.187551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.187995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.188448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.188488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.188944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.189422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.189462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.189854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.190306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.190345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.190737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.191193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.191234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 
00:28:12.657 [2024-04-26 16:10:52.191635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.192017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.192055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.192400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.192817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.192862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.193286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.193661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.193678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.194107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.194438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.194477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.194872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.195237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.195277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.195702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.196160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.196201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.657 qpair failed and we were unable to recover it. 00:28:12.657 [2024-04-26 16:10:52.196512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.657 [2024-04-26 16:10:52.196918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.196956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 
00:28:12.658 [2024-04-26 16:10:52.197397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.197853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.197892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.198284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.198733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.198751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.199083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.199546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.199585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.199957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.200285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.200325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.200707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.201105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.201152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.201540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.201967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.202006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.202467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.202699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.202738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 
00:28:12.658 [2024-04-26 16:10:52.203066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.203448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.203487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.203900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.204326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.204366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.204791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.205079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.205119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.205528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.205974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.206012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.206326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.206708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.206746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.207190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.207596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.207635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.208022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.208412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.208451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 
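Note on the errors above: errno = 111 is ECONNREFUSED. While the target side of the disconnect test is down, nothing is accepting TCP connections on 10.0.0.2:4420, so every connect() attempt from the initiator is refused and each refusal shows up as a posix_sock_create error followed by the nvme_tcp_qpair_connect_sock failure and the "qpair failed" message. The minimal C sketch below is an illustration only, not SPDK code; the address and port come from the log, while the retry count and delay are assumptions.

/* Hedged sketch: reproduce the ECONNREFUSED (errno 111) seen above by
 * repeatedly attempting a TCP connect() to 10.0.0.2:4420 while no NVMe-oF
 * target is listening. Illustration only -- not SPDK's posix_sock_create(). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    for (int attempt = 0; attempt < 5; attempt++) {   /* retry count assumed */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the port this prints errno 111 (ECONNREFUSED). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        } else {
            printf("connected on attempt %d\n", attempt);
            close(fd);
            return 0;
        }
        close(fd);
        usleep(100 * 1000);                      /* brief pause before retrying (assumed) */
    }
    return 1;
}

Because a refused connection fails immediately instead of timing out, many attempts can be retried and logged within the same second, which is consistent with the burst of timestamps above.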
00:28:12.658 [2024-04-26 16:10:52.208908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.658 [2024-04-26 16:10:52.209240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.658 [2024-04-26 16:10:52.209261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420
00:28:12.658 qpair failed and we were unable to recover it.
00:28:12.658 [2024-04-26 16:10:52.209628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 2605524 Killed "${NVMF_APP[@]}" "$@"
00:28:12.658 [2024-04-26 16:10:52.209930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.658 [2024-04-26 16:10:52.209968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420
00:28:12.658 qpair failed and we were unable to recover it.
00:28:12.658 [2024-04-26 16:10:52.210423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.658 16:10:52 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:28:12.658 [2024-04-26 16:10:52.210816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.658 [2024-04-26 16:10:52.210834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420
00:28:12.658 qpair failed and we were unable to recover it.
00:28:12.658 16:10:52 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:28:12.658 [2024-04-26 16:10:52.211189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.658 16:10:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:28:12.658 16:10:52 -- common/autotest_common.sh@710 -- # xtrace_disable
00:28:12.658 [2024-04-26 16:10:52.211599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.658 [2024-04-26 16:10:52.211617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420
00:28:12.658 qpair failed and we were unable to recover it.
00:28:12.658 16:10:52 -- common/autotest_common.sh@10 -- # set +x
00:28:12.658 [2024-04-26 16:10:52.211977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.658 [2024-04-26 16:10:52.212271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.658 [2024-04-26 16:10:52.212293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420
00:28:12.658 qpair failed and we were unable to recover it.
00:28:12.658 [2024-04-26 16:10:52.212728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.658 [2024-04-26 16:10:52.213119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.658 [2024-04-26 16:10:52.213137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420
00:28:12.658 qpair failed and we were unable to recover it.
00:28:12.658 [2024-04-26 16:10:52.213556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.213896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.213914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.214260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.214601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.214619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.215018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.215437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.215455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.215813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.216224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.216247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.216582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.216856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.216873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.217269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.217599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.217617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 
00:28:12.658 16:10:52 -- nvmf/common.sh@470 -- # nvmfpid=2606245
00:28:12.658 16:10:52 -- nvmf/common.sh@471 -- # waitforlisten 2606245
00:28:12.658 [2024-04-26 16:10:52.218061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.658 16:10:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:12.658 16:10:52 -- common/autotest_common.sh@817 -- # '[' -z 2606245 ']'
00:28:12.658 [2024-04-26 16:10:52.218488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.658 [2024-04-26 16:10:52.218507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420
00:28:12.658 qpair failed and we were unable to recover it.
00:28:12.658 16:10:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:12.658 [2024-04-26 16:10:52.218701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.658 16:10:52 -- common/autotest_common.sh@822 -- # local max_retries=100
00:28:12.658 [2024-04-26 16:10:52.219076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.658 [2024-04-26 16:10:52.219101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420
00:28:12.658 qpair failed and we were unable to recover it.
00:28:12.658 16:10:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:12.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:12.658 16:10:52 -- common/autotest_common.sh@826 -- # xtrace_disable
00:28:12.658 [2024-04-26 16:10:52.219523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.658 16:10:52 -- common/autotest_common.sh@10 -- # set +x
00:28:12.658 [2024-04-26 16:10:52.219810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.658 [2024-04-26 16:10:52.219828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420
00:28:12.658 qpair failed and we were unable to recover it.
00:28:12.658 [2024-04-26 16:10:52.220171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.658 [2024-04-26 16:10:52.220526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.658 [2024-04-26 16:10:52.220544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420
00:28:12.658 qpair failed and we were unable to recover it.
00:28:12.658 [2024-04-26 16:10:52.220809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.658 [2024-04-26 16:10:52.221160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.658 [2024-04-26 16:10:52.221179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420
00:28:12.658 qpair failed and we were unable to recover it.
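The shell trace interleaved above is the restart half of the test: target_disconnect.sh has killed the previous target process (the "Killed "${NVMF_APP[@]}"" message at line 44), disconnect_init 10.0.0.2 runs nvmfappstart -m 0xF0, which launches a fresh nvmf_tgt inside the cvl_0_0_ns_spdk namespace and records its pid (2606245), and waitforlisten then waits for that process to come up on the RPC socket (rpc_addr=/var/tmp/spdk.sock, max_retries=100). As a rough sketch of what that wait amounts to (the socket path and retry cap come from the log; the poll interval and everything else are assumptions, not the actual autotest_common.sh helper), one can poll the UNIX-domain socket until a connect() succeeds:

/* Hedged sketch of a waitforlisten-style readiness check: poll until a
 * process is accepting connections on /var/tmp/spdk.sock. The path and the
 * retry cap of 100 are taken from the log; the 100 ms interval is assumed,
 * and this is not the actual autotest_common.sh implementation. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int rpc_socket_ready(const char *path)
{
    struct sockaddr_un addr = { 0 };
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return 0;
    int ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    close(fd);
    return ok;
}

int main(void)
{
    const char *rpc_addr = "/var/tmp/spdk.sock";   /* from the log */
    const int max_retries = 100;                   /* from the log */

    for (int i = 0; i < max_retries; i++) {
        if (rpc_socket_ready(rpc_addr)) {
            printf("target is listening on %s\n", rpc_addr);
            return 0;
        }
        usleep(100 * 1000);                        /* assumed poll interval */
    }
    fprintf(stderr, "timed out waiting for %s\n", rpc_addr);
    return 1;
}

Until the new nvmf_tgt finishes starting and re-creates its listener on 10.0.0.2:4420, the initiator keeps logging the same ECONNREFUSED retries seen before and after this point.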
00:28:12.658 [2024-04-26 16:10:52.221531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.221801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.221822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.222164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.222509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.222528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.222947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.223288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.223306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.223587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.223978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.223996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.224274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.224612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.224629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.224976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.225333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.658 [2024-04-26 16:10:52.225351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.658 qpair failed and we were unable to recover it. 00:28:12.658 [2024-04-26 16:10:52.225712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.226110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.226128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 
00:28:12.659 [2024-04-26 16:10:52.226557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.226884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.226902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.227326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.227598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.227616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.228012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.228357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.228375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.228746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.229103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.229124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.229498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.229938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.229956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.230387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.230798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.230816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.231105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.231521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.231539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 
00:28:12.659 [2024-04-26 16:10:52.231877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.232237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.232255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.232618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.232977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.232995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.233330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.233694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.233712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.234081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.234422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.234440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.234788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.235049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.235084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.235456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.235735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.235752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.236150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.236556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.236582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 
00:28:12.659 [2024-04-26 16:10:52.236939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.237365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.237386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.237720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.238083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.238101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.238491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.238869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.238895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.239187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.239467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.239498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.239819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.240166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.240185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.240652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.241110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.241129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.241480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.241814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.241827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 
00:28:12.659 [2024-04-26 16:10:52.242234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.242570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.242583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.243086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.243435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.243451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.243877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.244191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.244204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.244486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.244867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.244879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.245587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.245960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.245975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.246320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.246668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.246682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.247093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.247359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.247372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 
00:28:12.659 [2024-04-26 16:10:52.247711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.247908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.247921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.248252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.248662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.248675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.249088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.249427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.249452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.249820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.250095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.250110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.250462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.250846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.250858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.251250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.251497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.251520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.251892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.252222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.252237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 
00:28:12.659 [2024-04-26 16:10:52.252582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.253043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.253057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.253403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.253788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.253800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.254231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.254621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.254634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.659 qpair failed and we were unable to recover it. 00:28:12.659 [2024-04-26 16:10:52.254996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.659 [2024-04-26 16:10:52.255332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.255345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.255677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.256013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.256026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.256436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.256829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.256842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.257205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.257585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.257598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 
00:28:12.660 [2024-04-26 16:10:52.258005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.258383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.258396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.258810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.259083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.259097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.259377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.259785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.259797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.260080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.260430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.260443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.260760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.261170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.261184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.261524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.261889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.261901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.262232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.262562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.262574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 
00:28:12.660 [2024-04-26 16:10:52.262981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.263366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.263380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.263718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.264048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.264060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.264473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.264801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.264814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.265068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.265457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.265470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.265821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.266242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.266255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.266670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.266989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.267003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.267342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.267751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.267764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 
00:28:12.660 [2024-04-26 16:10:52.268181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.268533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.268547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.268668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.269049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.269062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.269457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.269792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.269805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.270205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.270612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.270625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.270907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.271330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.271344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.271755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.272081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.272095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.272422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.272831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.272844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 
00:28:12.660 [2024-04-26 16:10:52.273177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.273505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.273518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.273908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.274262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.274276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.274710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.275044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.275057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.275467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.275742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.275756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.276170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.276591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.276605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.276938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.277262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.277275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.277627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.278019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.278032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 
00:28:12.660 [2024-04-26 16:10:52.278163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.278512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.278528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.278938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.279253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.279267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.279604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.280013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.280026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.280368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.280657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.280670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.281090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.281379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.281392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.281732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.282136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.282150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.282470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.282791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.282804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 
00:28:12.660 [2024-04-26 16:10:52.283144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.283478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.283491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.283914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.284245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.284258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.284528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.284796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.284810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.285148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.285510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.285522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.285814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.286161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.286176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.660 [2024-04-26 16:10:52.286505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.286887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.660 [2024-04-26 16:10:52.286916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.660 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.287212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.287597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.287611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 
00:28:12.661 [2024-04-26 16:10:52.288042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.288439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.288453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.288863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.289194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.289208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.289489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.289821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.289834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.290180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.290446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.290459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.290818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.291158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.291172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.291441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.291874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.291887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.292294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.292636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.292649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 
00:28:12.661 [2024-04-26 16:10:52.292987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.293247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.293261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.293672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.293855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.293868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.294186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.294421] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:28:12.661 [2024-04-26 16:10:52.294497] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.661 [2024-04-26 16:10:52.294572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.294585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.294917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.295316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.295331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.295603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.295993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.296006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.296348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.296615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.296627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 
00:28:12.661 [2024-04-26 16:10:52.296959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.297312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.297326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.297655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.298041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.298054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.298328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.298660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.298673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.299004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.299385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.299398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.299812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.300128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.300141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.300481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.300813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.300827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.301169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.301497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.301515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 
00:28:12.661 [2024-04-26 16:10:52.301866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.302303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.302317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.302722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.303050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.303062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.303457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.303774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.303787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.304077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.304425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.304438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.304827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.305165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.305179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.305585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.305900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.305913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.306208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.306485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.306499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 
00:28:12.661 [2024-04-26 16:10:52.306834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.307245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.307259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.307544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.307880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.307893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.308215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.308595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.308610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.308995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.309335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.309349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.309701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.310020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.310033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.310368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.310777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.310790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.311175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.311581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.311594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 
00:28:12.661 [2024-04-26 16:10:52.311795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.312201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.312215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.312536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.312862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.312875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.313151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.313495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.313507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.313793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.314175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.314190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.314600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.315035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.315048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.315438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.315781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.315798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.316187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.316595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.316608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 
00:28:12.661 [2024-04-26 16:10:52.316930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.317345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.317359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.317640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.318004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.318017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.318309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.318694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.318707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.661 qpair failed and we were unable to recover it. 00:28:12.661 [2024-04-26 16:10:52.319066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.661 [2024-04-26 16:10:52.319455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.319468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.662 qpair failed and we were unable to recover it. 00:28:12.662 [2024-04-26 16:10:52.319800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.320115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.320128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.662 qpair failed and we were unable to recover it. 00:28:12.662 [2024-04-26 16:10:52.320274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.320634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.320647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.662 qpair failed and we were unable to recover it. 00:28:12.662 [2024-04-26 16:10:52.320768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.321091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.321105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.662 qpair failed and we were unable to recover it. 
00:28:12.662 [2024-04-26 16:10:52.321491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.321755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.321767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.662 qpair failed and we were unable to recover it. 00:28:12.662 [2024-04-26 16:10:52.322097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.322505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.322520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.662 qpair failed and we were unable to recover it. 00:28:12.662 [2024-04-26 16:10:52.322880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.323212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.323225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.662 qpair failed and we were unable to recover it. 00:28:12.662 [2024-04-26 16:10:52.323579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.323855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.323868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.662 qpair failed and we were unable to recover it. 00:28:12.662 [2024-04-26 16:10:52.324142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.324461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.324474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.662 qpair failed and we were unable to recover it. 00:28:12.662 [2024-04-26 16:10:52.324884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.325225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.325239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.662 qpair failed and we were unable to recover it. 00:28:12.662 [2024-04-26 16:10:52.325652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.325988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.326002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.662 qpair failed and we were unable to recover it. 
00:28:12.662 [2024-04-26 16:10:52.326422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.326755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.326768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.662 qpair failed and we were unable to recover it. 00:28:12.662 [2024-04-26 16:10:52.327154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.327481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.327495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.662 qpair failed and we were unable to recover it. 00:28:12.662 [2024-04-26 16:10:52.327910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.328238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.328251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.662 qpair failed and we were unable to recover it. 00:28:12.662 [2024-04-26 16:10:52.328665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.329047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.662 [2024-04-26 16:10:52.329060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:12.662 qpair failed and we were unable to recover it. 00:28:12.662 [2024-04-26 16:10:52.329385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.092 [2024-04-26 16:10:52.329770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.092 [2024-04-26 16:10:52.329783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.092 qpair failed and we were unable to recover it. 00:28:13.092 [2024-04-26 16:10:52.330202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.092 [2024-04-26 16:10:52.330545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.092 [2024-04-26 16:10:52.330559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.092 qpair failed and we were unable to recover it. 00:28:13.092 [2024-04-26 16:10:52.330834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.092 [2024-04-26 16:10:52.331125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.092 [2024-04-26 16:10:52.331138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.092 qpair failed and we were unable to recover it. 
00:28:13.092 [2024-04-26 16:10:52.331555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.092 [2024-04-26 16:10:52.331683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.092 [2024-04-26 16:10:52.331696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.092 qpair failed and we were unable to recover it. 00:28:13.092 [2024-04-26 16:10:52.332084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.092 [2024-04-26 16:10:52.332494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.092 [2024-04-26 16:10:52.332507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.092 qpair failed and we were unable to recover it. 00:28:13.092 [2024-04-26 16:10:52.332918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.092 [2024-04-26 16:10:52.333267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.092 [2024-04-26 16:10:52.333280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.092 qpair failed and we were unable to recover it. 00:28:13.092 [2024-04-26 16:10:52.333707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.092 [2024-04-26 16:10:52.334043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.092 [2024-04-26 16:10:52.334055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.092 qpair failed and we were unable to recover it. 00:28:13.092 [2024-04-26 16:10:52.334472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.092 [2024-04-26 16:10:52.334749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.334762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.335175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.335531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.335543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.335824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.336152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.336165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 
00:28:13.093 [2024-04-26 16:10:52.336525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.336908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.336921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.337195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.337377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.337390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.337733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.338062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.338080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.338599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.338945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.338958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.339393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.339726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.339739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.340076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.340460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.340474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.340796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.341201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.341214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 
00:28:13.093 [2024-04-26 16:10:52.341615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.341913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.341925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.342265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.342674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.342687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.343023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.343478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.343492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.343909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.344255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.344268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.344626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.344955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.344968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.345352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.345628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.345642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.346100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.346368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.346381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 
00:28:13.093 [2024-04-26 16:10:52.346709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.347032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.347045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.347381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.347765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.347779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.348134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.348539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.348551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.348953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.349231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.349245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.349646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.350029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.350042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.350371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.350713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.350726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.351019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.351280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.351293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 
00:28:13.093 [2024-04-26 16:10:52.351694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.352013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.352026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.352308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.352493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.352506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.352835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.353169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.353182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.353501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.353868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.353881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.354215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.354623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.354636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.355022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.355354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.355367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.355705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.356045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.356057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 
00:28:13.093 [2024-04-26 16:10:52.356334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.356685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.356698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.093 [2024-04-26 16:10:52.357016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.357407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.357420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.357824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.358206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.358220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.358356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.358633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.358646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.359067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.359411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.359424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.359691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.360002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.360015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.360382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.360700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.360713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 
00:28:13.093 [2024-04-26 16:10:52.360902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.361283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.361297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.361703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.362032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.362044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.362449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.362775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.362788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.363173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.363556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.363569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.363983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.364115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.364128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.093 qpair failed and we were unable to recover it. 00:28:13.093 [2024-04-26 16:10:52.364514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.093 [2024-04-26 16:10:52.364732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.364744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.365065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.365415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.365428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 
00:28:13.094 [2024-04-26 16:10:52.365695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.366098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.366111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.366464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.366849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.366861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.367205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.367470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.367482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.367803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.368206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.368219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.368634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.369027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.369039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.369226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.369636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.369648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.369780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.370113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.370126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 
00:28:13.094 [2024-04-26 16:10:52.370366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.370728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.370741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.371137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.371471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.371484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.371877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.372286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.372299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.372576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.372981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.372993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.373389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.373788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.373800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.374151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.374424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.374436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.374858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.375260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.375273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 
00:28:13.094 [2024-04-26 16:10:52.375599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.376015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.376030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.376447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.376764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.376782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.377102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.377484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.377497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.377833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.378159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.378172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.378515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.378858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.378871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.379255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.379660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.379673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.380012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.380328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.380341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 
00:28:13.094 [2024-04-26 16:10:52.380672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.380942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.380954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.381241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.381425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.381437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.381765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.382096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.382109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.382467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.382797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.382810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.383196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.383535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.383548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.383871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.384270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.384283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.384627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.384808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.384820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 
00:28:13.094 [2024-04-26 16:10:52.385205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.385529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.385543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.385961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.386365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.386378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.386731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.387048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.387060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.387411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.387796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.387808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.388065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.388473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.388486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.388748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.389025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.389038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.389426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.389829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.389841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 
00:28:13.094 [2024-04-26 16:10:52.390280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.390603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.390615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.390946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.391295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.391307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.391627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.391958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.391970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.392314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.392670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.392682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.393019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.393366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.393378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.393803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.394144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.394157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 00:28:13.094 [2024-04-26 16:10:52.394437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.394846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.394858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.094 qpair failed and we were unable to recover it. 
00:28:13.094 [2024-04-26 16:10:52.395159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.094 [2024-04-26 16:10:52.395474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.395486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.395873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.396256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.396269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.396678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.397032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.397044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.397433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.397765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.397777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.398042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.398478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.398491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.398771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.399099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.399111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.399434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.399840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.399853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 
00:28:13.095 [2024-04-26 16:10:52.400181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.400460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.400473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.400809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.401213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.401226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.401588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.401909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.401922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.402240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.402670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.402682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.403015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.403422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.403434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.403775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.404157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.404169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.404533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.404935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.404947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 
00:28:13.095 [2024-04-26 16:10:52.405360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.405743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.405756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.406121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.406532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.406545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.406827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.407096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.407109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.407431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.407710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.407723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.408012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.408281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.408294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.408678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.409080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.409093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.409370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.409700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.409713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 
00:28:13.095 [2024-04-26 16:10:52.410099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.410359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.410372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.410689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.410871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.410884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.411205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.411605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.411618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.411956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.412343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.412357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.412743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.413082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.413095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.413423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.413700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.413712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.414121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.414456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.414469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 
00:28:13.095 [2024-04-26 16:10:52.414803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.415162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.415175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.415511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.415773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.415786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.416117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.416436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.416449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.416605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.417000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.417013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.417206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.417473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.417485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.095 qpair failed and we were unable to recover it. 00:28:13.095 [2024-04-26 16:10:52.417872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.095 [2024-04-26 16:10:52.418222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.418249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.096 qpair failed and we were unable to recover it. 00:28:13.096 [2024-04-26 16:10:52.418584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.419013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.419026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.096 qpair failed and we were unable to recover it. 
00:28:13.096 [2024-04-26 16:10:52.419436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.419755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.419768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.096 qpair failed and we were unable to recover it. 00:28:13.096 [2024-04-26 16:10:52.420188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.420517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.420530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.096 qpair failed and we were unable to recover it. 00:28:13.096 [2024-04-26 16:10:52.420797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.421137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.421150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.096 qpair failed and we were unable to recover it. 00:28:13.096 [2024-04-26 16:10:52.421489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.421820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.421833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.096 qpair failed and we were unable to recover it. 00:28:13.096 [2024-04-26 16:10:52.422162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.422542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.422555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.096 qpair failed and we were unable to recover it. 00:28:13.096 [2024-04-26 16:10:52.422897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.423306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.423320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.096 qpair failed and we were unable to recover it. 00:28:13.096 [2024-04-26 16:10:52.423643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.423972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.423984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.096 qpair failed and we were unable to recover it. 
00:28:13.096 [2024-04-26 16:10:52.423997] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:13.096 [2024-04-26 16:10:52.424313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.424694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.424706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.096 qpair failed and we were unable to recover it. 00:28:13.096 [2024-04-26 16:10:52.425037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.425396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.425409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.096 qpair failed and we were unable to recover it. 00:28:13.096 [2024-04-26 16:10:52.425821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.426157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.426171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.096 qpair failed and we were unable to recover it. 00:28:13.096 [2024-04-26 16:10:52.426556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.426877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.426889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.096 qpair failed and we were unable to recover it. 00:28:13.096 [2024-04-26 16:10:52.427220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.427555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.427568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.096 qpair failed and we were unable to recover it. 00:28:13.096 [2024-04-26 16:10:52.427957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.428364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.428377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.096 qpair failed and we were unable to recover it. 00:28:13.096 [2024-04-26 16:10:52.428535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.428803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.096 [2024-04-26 16:10:52.428828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.096 qpair failed and we were unable to recover it. 
00:28:13.096 [2024-04-26 16:10:52.429068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.096 [2024-04-26 16:10:52.429389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.096 [2024-04-26 16:10:52.429404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:28:13.096 qpair failed and we were unable to recover it.
00:28:13.096 [... the same three-message pattern — two posix.c:1037:posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." — repeats back-to-back from 16:10:52.429 through 16:10:52.532, always against the same tqpair and target address ...]
00:28:13.102 [2024-04-26 16:10:52.532185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.102 [2024-04-26 16:10:52.532522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.102 [2024-04-26 16:10:52.532535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:28:13.102 qpair failed and we were unable to recover it.
00:28:13.102 [2024-04-26 16:10:52.532883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.533244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.533256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.102 qpair failed and we were unable to recover it. 00:28:13.102 [2024-04-26 16:10:52.533662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.533945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.533958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.102 qpair failed and we were unable to recover it. 00:28:13.102 [2024-04-26 16:10:52.534284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.534682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.534694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.102 qpair failed and we were unable to recover it. 00:28:13.102 [2024-04-26 16:10:52.535103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.535365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.535378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.102 qpair failed and we were unable to recover it. 00:28:13.102 [2024-04-26 16:10:52.535671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.535949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.535962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.102 qpair failed and we were unable to recover it. 00:28:13.102 [2024-04-26 16:10:52.536408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.536755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.536767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.102 qpair failed and we were unable to recover it. 00:28:13.102 [2024-04-26 16:10:52.537156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.537418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.537431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.102 qpair failed and we were unable to recover it. 
00:28:13.102 [2024-04-26 16:10:52.537767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.538041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.538055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.102 qpair failed and we were unable to recover it. 00:28:13.102 [2024-04-26 16:10:52.538492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.538886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.538899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.102 qpair failed and we were unable to recover it. 00:28:13.102 [2024-04-26 16:10:52.539231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.539511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.539524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.102 qpair failed and we were unable to recover it. 00:28:13.102 [2024-04-26 16:10:52.539708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.539972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.539984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.102 qpair failed and we were unable to recover it. 00:28:13.102 [2024-04-26 16:10:52.540381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.540768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.540782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.102 qpair failed and we were unable to recover it. 00:28:13.102 [2024-04-26 16:10:52.541165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.541547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.541561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.102 qpair failed and we were unable to recover it. 00:28:13.102 [2024-04-26 16:10:52.541897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.542090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.542103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.102 qpair failed and we were unable to recover it. 
00:28:13.102 [2024-04-26 16:10:52.542421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.542781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.542794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.102 qpair failed and we were unable to recover it. 00:28:13.102 [2024-04-26 16:10:52.543189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.543514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.543528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.102 qpair failed and we were unable to recover it. 00:28:13.102 [2024-04-26 16:10:52.543803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.544080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.544095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.102 qpair failed and we were unable to recover it. 00:28:13.102 [2024-04-26 16:10:52.544512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.544895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.544910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.102 qpair failed and we were unable to recover it. 00:28:13.102 [2024-04-26 16:10:52.545308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.545637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.102 [2024-04-26 16:10:52.545651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.102 qpair failed and we were unable to recover it. 00:28:13.102 [2024-04-26 16:10:52.545980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.546376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.546389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.546711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.547064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.547084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 
00:28:13.103 [2024-04-26 16:10:52.547417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.547717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.547732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.548121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.548428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.548441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.548724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.549040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.549055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.549361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.549698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.549711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.550131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.550465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.550478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.550757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.551091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.551109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.551381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.551709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.551721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 
00:28:13.103 [2024-04-26 16:10:52.552006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.552385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.552398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.552670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.553053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.553066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.553339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.553677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.553690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.553957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.554297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.554310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.554641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.554963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.554976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.555309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.555580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.555592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.555977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.556251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.556264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 
00:28:13.103 [2024-04-26 16:10:52.556675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.556994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.557009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.557454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.557708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.557720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.558050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.558330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.558343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.558605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.558885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.558897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.559239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.559562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.559575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.559867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.560135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.560148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.560539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.560856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.560868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 
00:28:13.103 [2024-04-26 16:10:52.561143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.561481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.561493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.561833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.562165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.562178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.562454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.562808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.562821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.563101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.563643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.563658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.563920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.564251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.564264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.564709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.565090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.565103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.565467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.565790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.565802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 
00:28:13.103 [2024-04-26 16:10:52.566087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.566367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.566379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.103 qpair failed and we were unable to recover it. 00:28:13.103 [2024-04-26 16:10:52.566771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.103 [2024-04-26 16:10:52.567112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.567126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.567470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.567873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.567886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.568010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.568413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.568427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.568699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.569112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.569126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.569468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.569740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.569754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.570145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.570414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.570429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 
00:28:13.104 [2024-04-26 16:10:52.570759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.571086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.571099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.571430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.571761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.571774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.572123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.572519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.572531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.572849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.573115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.573129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.573411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.573669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.573681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.573958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.574233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.574247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.574443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.574777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.574790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 
00:28:13.104 [2024-04-26 16:10:52.575119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.575520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.575532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.575797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.576157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.576169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.576496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.576881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.576899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.577293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.577568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.577581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.577901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.578245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.578258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.578548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.578819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.578831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.579113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.579577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.579591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 
00:28:13.104 [2024-04-26 16:10:52.579864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.580127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.580141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.580483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.580819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.580832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.581022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.581352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.581366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.581635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.581988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.582000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.582274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.582550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.582563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.582943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.583276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.583290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.583679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.584030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.584042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 
00:28:13.104 [2024-04-26 16:10:52.584385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.584726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.584739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.585081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.585300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.585312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.585665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.585847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.585860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.586197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.586522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.586534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.104 qpair failed and we were unable to recover it. 00:28:13.104 [2024-04-26 16:10:52.586858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.104 [2024-04-26 16:10:52.587186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.587199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.587614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.588019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.588031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.588310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.588695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.588708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 
00:28:13.105 [2024-04-26 16:10:52.589044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.589312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.589325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.589739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.590127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.590140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.590490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.590762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.590775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.591029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.591205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.591218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.591486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.591823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.591836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.592191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.592525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.592537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.592922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.593185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.593197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 
00:28:13.105 [2024-04-26 16:10:52.593457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.593599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.593611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.593943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.594327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.594341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.594726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.595043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.595055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.595413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.595729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.595742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.596050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.596329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.596342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.596617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.596956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.596969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.597344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.597766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.597778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 
00:28:13.105 [2024-04-26 16:10:52.598123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.598585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.598597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.598881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.599235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.599248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.599517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.599783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.599796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.600212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.600539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.600551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.600949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.601299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.601312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.601659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.602134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.602147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.602505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.602884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.602897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 
00:28:13.105 [2024-04-26 16:10:52.603191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.603535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.603549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.603897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.604231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.604244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.604578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.604948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.604961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.605290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.605648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.605661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.105 [2024-04-26 16:10:52.606023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.606376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.105 [2024-04-26 16:10:52.606389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.105 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.606725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.607128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.607141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.607489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.607767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.607779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 
00:28:13.106 [2024-04-26 16:10:52.608214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.608554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.608567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.608929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.609369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.609382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.609673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.610080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.610094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.610501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.610900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.610914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.611389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.611664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.611676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.612018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.612411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.612424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.612760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.613165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.613178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 
00:28:13.106 [2024-04-26 16:10:52.613515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.613804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.613816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.614230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.614636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.614649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.615040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.615395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.615409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.615701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.616111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.616125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.616415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.616632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.616644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.616946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.617264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.617278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.617671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.618102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.618114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 
00:28:13.106 [2024-04-26 16:10:52.618337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.618726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.618738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.619156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.619497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.619509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.619850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.620131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.620145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.620488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.620899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.620912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.621320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.621639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.621651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.621958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.622362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.622376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.622782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.623178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.623192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 
00:28:13.106 [2024-04-26 16:10:52.623479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.623880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.623893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.624245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.624526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.624539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.624867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.625197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.625211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.625499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.625914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.625926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.626273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.626667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.626679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.627012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.627338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.627352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.627736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.628078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.628092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 
00:28:13.106 [2024-04-26 16:10:52.628431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.628878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.628891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.629238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.629563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.629576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.629944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.630291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.106 [2024-04-26 16:10:52.630305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.106 qpair failed and we were unable to recover it. 00:28:13.106 [2024-04-26 16:10:52.630639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.631062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.631086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.631387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.631694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.631707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.632050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.632344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.632357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.632645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.632913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.632925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 
00:28:13.107 [2024-04-26 16:10:52.633340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.633615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.633628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.634084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.634504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.634517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.634859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.635186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.635199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.635539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.635816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.635828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.636166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.636443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.636456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.636820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.637175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.637188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.637522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.637834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.637849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 
00:28:13.107 [2024-04-26 16:10:52.638186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.638578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.638594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.638920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.639323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.639337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.639741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.640155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.640169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.640514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.640931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.640943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.641333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.641614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.641627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.641951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.642305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.642318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.642713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.643053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.643066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 
00:28:13.107 [2024-04-26 16:10:52.643414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.643797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.643809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.644223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.644552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.644564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.644990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.645352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.645365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.645705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.646113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.646126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.646483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.646863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.646875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.647204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.647542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.647554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.647837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.648171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.648184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 
00:28:13.107 [2024-04-26 16:10:52.648535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.648804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.648816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.649236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.649620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.649632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.650055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.650488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.650501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.650836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.651230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.651246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.651571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.651920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.651932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.652383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.652711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.652723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.653058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.653356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.653370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 
00:28:13.107 [2024-04-26 16:10:52.653702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.654183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.654197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.107 [2024-04-26 16:10:52.654492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.654843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.107 [2024-04-26 16:10:52.654856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.107 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.654867] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:13.108 [2024-04-26 16:10:52.654905] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:13.108 [2024-04-26 16:10:52.654916] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:13.108 [2024-04-26 16:10:52.654926] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:13.108 [2024-04-26 16:10:52.654934] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:13.108 [2024-04-26 16:10:52.655118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:13.108 [2024-04-26 16:10:52.655242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.655196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:13.108 [2024-04-26 16:10:52.655263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:13.108 [2024-04-26 16:10:52.655285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:13.108 [2024-04-26 16:10:52.655586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.655600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.655980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.656355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.656369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.656661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.657129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.657142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 
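The interleaved NOTICE entries above mark the SPDK application finishing startup: tracepoint group mask 0xFFFF is enabled (the log itself suggests 'spdk_trace -s nvmf -i 0', or copying /dev/shm/nvmf_trace.0, for later analysis) and reactors come up on cores 4-7, while the connecting side keeps logging "connect() failed, errno = 111". On Linux, errno 111 is ECONNREFUSED: the TCP connection to 10.0.0.2:4420 is being actively refused because nothing is accepting on that address and port yet. As a minimal, self-contained sketch of that same failure mode (assumptions: no listener on the chosen port; 127.0.0.1 and port 4420 are used here only for illustration, this is not the test's own code):

/*
 * Minimal sketch: reproduces the "connect() failed, errno = 111" condition
 * seen in the posix_sock_create messages by connecting to a port with no
 * listener. 4420 is the NVMe/TCP default port; 127.0.0.1 keeps the example
 * self-contained. Assumes nothing is listening on that port locally.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        struct sockaddr_in addr = {0};
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0) {
                perror("socket");
                return 1;
        }

        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        /* With no listener on the port, connect() returns -1 and errno is
         * ECONNREFUSED (111 on Linux), matching the log lines above. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
                printf("connect() failed, errno = %d (%s)\n",
                       errno, strerror(errno));

        close(fd);
        return 0;
}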
00:28:13.108 [2024-04-26 16:10:52.657481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.657755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.657768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.658194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.658526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.658539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.658878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.659147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.659160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.659502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.659791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.659804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.660164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.660454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.660467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.660746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.661069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.661090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.661578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.661979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.661992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 
00:28:13.108 [2024-04-26 16:10:52.662509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.662942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.662955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.663393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.663735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.663748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.664216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.664527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.664541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.664875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.665278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.665292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.665569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.665863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.665876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.666297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.666717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.666731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.667079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.667518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.667531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 
00:28:13.108 [2024-04-26 16:10:52.667827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.668262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.668276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.668684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.669100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.669113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.669520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.669805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.669817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.670229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.670538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.670551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.670951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.671302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.671315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.671748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.672117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.672130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.672490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.672780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.672792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 
00:28:13.108 [2024-04-26 16:10:52.673168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.673500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.673512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.673781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.674113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.674126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.674402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.674741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.674754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.108 qpair failed and we were unable to recover it. 00:28:13.108 [2024-04-26 16:10:52.675156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.675492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.108 [2024-04-26 16:10:52.675505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.109 qpair failed and we were unable to recover it. 00:28:13.109 [2024-04-26 16:10:52.675847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.676207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.676220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.109 qpair failed and we were unable to recover it. 00:28:13.109 [2024-04-26 16:10:52.676627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.676995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.677008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.109 qpair failed and we were unable to recover it. 00:28:13.109 [2024-04-26 16:10:52.677395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.677759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.677771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.109 qpair failed and we were unable to recover it. 
00:28:13.109 [2024-04-26 16:10:52.678156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.678592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.678605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.109 qpair failed and we were unable to recover it. 00:28:13.109 [2024-04-26 16:10:52.678945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.679274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.679287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.109 qpair failed and we were unable to recover it. 00:28:13.109 [2024-04-26 16:10:52.679635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.680007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.680020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.109 qpair failed and we were unable to recover it. 00:28:13.109 [2024-04-26 16:10:52.680365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.680699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.680712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.109 qpair failed and we were unable to recover it. 00:28:13.109 [2024-04-26 16:10:52.681146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.681427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.681439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.109 qpair failed and we were unable to recover it. 00:28:13.109 [2024-04-26 16:10:52.681850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.682214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.682227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.109 qpair failed and we were unable to recover it. 00:28:13.109 [2024-04-26 16:10:52.682648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.683059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.683075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.109 qpair failed and we were unable to recover it. 
00:28:13.109 [2024-04-26 16:10:52.683486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.683850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.683863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.109 qpair failed and we were unable to recover it. 00:28:13.109 [2024-04-26 16:10:52.684147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.684494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.684507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.109 qpair failed and we were unable to recover it. 00:28:13.109 [2024-04-26 16:10:52.684931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.685332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.685345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.109 qpair failed and we were unable to recover it. 00:28:13.109 [2024-04-26 16:10:52.685636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.686080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.686093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.109 qpair failed and we were unable to recover it. 00:28:13.109 [2024-04-26 16:10:52.686414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.686704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.686717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.109 qpair failed and we were unable to recover it. 00:28:13.109 [2024-04-26 16:10:52.687123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.687448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.687462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.109 qpair failed and we were unable to recover it. 00:28:13.109 [2024-04-26 16:10:52.687756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.688102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.109 [2024-04-26 16:10:52.688116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.109 qpair failed and we were unable to recover it. 
00:28:13.109 [2024-04-26 16:10:52.688459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.109 [2024-04-26 16:10:52.688790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.109 [2024-04-26 16:10:52.688802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:28:13.109 qpair failed and we were unable to recover it.
[... the same four-line sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x614000020040 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 16:10:52.689 through 16:10:52.801 ...]
00:28:13.433 [2024-04-26 16:10:52.801532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.433 [2024-04-26 16:10:52.801918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.433 [2024-04-26 16:10:52.801930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:28:13.433 qpair failed and we were unable to recover it.
00:28:13.433 [2024-04-26 16:10:52.802265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-04-26 16:10:52.802651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-04-26 16:10:52.802663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.433 qpair failed and we were unable to recover it. 00:28:13.433 [2024-04-26 16:10:52.803080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-04-26 16:10:52.803437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-04-26 16:10:52.803450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.433 qpair failed and we were unable to recover it. 00:28:13.433 [2024-04-26 16:10:52.803888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-04-26 16:10:52.804300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-04-26 16:10:52.804313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.433 qpair failed and we were unable to recover it. 00:28:13.433 [2024-04-26 16:10:52.804657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-04-26 16:10:52.804947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-04-26 16:10:52.804960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.433 qpair failed and we were unable to recover it. 00:28:13.433 [2024-04-26 16:10:52.805391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-04-26 16:10:52.805747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-04-26 16:10:52.805759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.433 qpair failed and we were unable to recover it. 00:28:13.433 [2024-04-26 16:10:52.806061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-04-26 16:10:52.806459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-04-26 16:10:52.806472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.433 qpair failed and we were unable to recover it. 00:28:13.433 [2024-04-26 16:10:52.806803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-04-26 16:10:52.807086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-04-26 16:10:52.807099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.433 qpair failed and we were unable to recover it. 
00:28:13.433 [2024-04-26 16:10:52.807511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-04-26 16:10:52.807846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-04-26 16:10:52.807858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.433 qpair failed and we were unable to recover it. 00:28:13.433 [2024-04-26 16:10:52.808253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-04-26 16:10:52.808657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-04-26 16:10:52.808670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.433 qpair failed and we were unable to recover it. 00:28:13.433 [2024-04-26 16:10:52.809073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-04-26 16:10:52.809386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-04-26 16:10:52.809399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.433 qpair failed and we were unable to recover it. 00:28:13.433 [2024-04-26 16:10:52.809740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.433 [2024-04-26 16:10:52.810152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.810166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.434 qpair failed and we were unable to recover it. 00:28:13.434 [2024-04-26 16:10:52.810458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.810840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.810852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.434 qpair failed and we were unable to recover it. 00:28:13.434 [2024-04-26 16:10:52.811269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.811605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.811618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.434 qpair failed and we were unable to recover it. 00:28:13.434 [2024-04-26 16:10:52.812106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.812507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.812519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.434 qpair failed and we were unable to recover it. 
00:28:13.434 [2024-04-26 16:10:52.812853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.813125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.813138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.434 qpair failed and we were unable to recover it. 00:28:13.434 [2024-04-26 16:10:52.813528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.813881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.813893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.434 qpair failed and we were unable to recover it. 00:28:13.434 [2024-04-26 16:10:52.814299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.814697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.814709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.434 qpair failed and we were unable to recover it. 00:28:13.434 [2024-04-26 16:10:52.815044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.815372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.815385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.434 qpair failed and we were unable to recover it. 00:28:13.434 [2024-04-26 16:10:52.815721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.816137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.816150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.434 qpair failed and we were unable to recover it. 00:28:13.434 [2024-04-26 16:10:52.816439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.816829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.816842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.434 qpair failed and we were unable to recover it. 00:28:13.434 [2024-04-26 16:10:52.817265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.817640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.817657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.434 qpair failed and we were unable to recover it. 
00:28:13.434 [2024-04-26 16:10:52.817998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.818332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.818345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.434 qpair failed and we were unable to recover it. 00:28:13.434 [2024-04-26 16:10:52.818678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.819030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.819042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.434 qpair failed and we were unable to recover it. 00:28:13.434 [2024-04-26 16:10:52.819430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.819840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.819853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.434 qpair failed and we were unable to recover it. 00:28:13.434 [2024-04-26 16:10:52.820273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.820657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.820669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.434 qpair failed and we were unable to recover it. 00:28:13.434 [2024-04-26 16:10:52.821057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.821401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.821414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.434 qpair failed and we were unable to recover it. 00:28:13.434 [2024-04-26 16:10:52.821773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.822121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.434 [2024-04-26 16:10:52.822134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.435 qpair failed and we were unable to recover it. 00:28:13.435 [2024-04-26 16:10:52.822500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.822909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.822922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.435 qpair failed and we were unable to recover it. 
00:28:13.435 [2024-04-26 16:10:52.823277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.823559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.823572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.435 qpair failed and we were unable to recover it. 00:28:13.435 [2024-04-26 16:10:52.823984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.824311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.824324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.435 qpair failed and we were unable to recover it. 00:28:13.435 [2024-04-26 16:10:52.824747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.825077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.825090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.435 qpair failed and we were unable to recover it. 00:28:13.435 [2024-04-26 16:10:52.825434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.825784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.825797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.435 qpair failed and we were unable to recover it. 00:28:13.435 [2024-04-26 16:10:52.826136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.826683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.826696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.435 qpair failed and we were unable to recover it. 00:28:13.435 [2024-04-26 16:10:52.827143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.827573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.827589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.435 qpair failed and we were unable to recover it. 00:28:13.435 [2024-04-26 16:10:52.828028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.828364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.828385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.435 qpair failed and we were unable to recover it. 
00:28:13.435 [2024-04-26 16:10:52.828772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.829110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.829123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.435 qpair failed and we were unable to recover it. 00:28:13.435 [2024-04-26 16:10:52.829404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.829747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.829759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.435 qpair failed and we were unable to recover it. 00:28:13.435 [2024-04-26 16:10:52.830149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.830478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.830490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.435 qpair failed and we were unable to recover it. 00:28:13.435 [2024-04-26 16:10:52.830783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.831180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.831193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.435 qpair failed and we were unable to recover it. 00:28:13.435 [2024-04-26 16:10:52.831531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.831850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.831865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.435 qpair failed and we were unable to recover it. 00:28:13.435 [2024-04-26 16:10:52.832230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.832504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.832516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.435 qpair failed and we were unable to recover it. 00:28:13.435 [2024-04-26 16:10:52.832825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.833178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.833191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.435 qpair failed and we were unable to recover it. 
00:28:13.435 [2024-04-26 16:10:52.833575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.833856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.833868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.435 qpair failed and we were unable to recover it. 00:28:13.435 [2024-04-26 16:10:52.834279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.834616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.834628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.435 qpair failed and we were unable to recover it. 00:28:13.435 [2024-04-26 16:10:52.835049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.835424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.835437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.435 qpair failed and we were unable to recover it. 00:28:13.435 [2024-04-26 16:10:52.835777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.836182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.836195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.435 qpair failed and we were unable to recover it. 00:28:13.435 [2024-04-26 16:10:52.836476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.836810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.836823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.435 qpair failed and we were unable to recover it. 00:28:13.435 [2024-04-26 16:10:52.837198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.837588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.435 [2024-04-26 16:10:52.837601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.435 qpair failed and we were unable to recover it. 00:28:13.435 [2024-04-26 16:10:52.838045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.838379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.838392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 
00:28:13.436 [2024-04-26 16:10:52.838736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.839138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.839154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 00:28:13.436 [2024-04-26 16:10:52.839672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.840026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.840038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 00:28:13.436 [2024-04-26 16:10:52.840431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.840714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.840726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 00:28:13.436 [2024-04-26 16:10:52.841122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.841546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.841558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 00:28:13.436 [2024-04-26 16:10:52.841854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.842186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.842199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 00:28:13.436 [2024-04-26 16:10:52.842557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.842887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.842900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 00:28:13.436 [2024-04-26 16:10:52.843169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.843511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.843523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 
00:28:13.436 [2024-04-26 16:10:52.843863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.844144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.844157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 00:28:13.436 [2024-04-26 16:10:52.844439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.844778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.844790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 00:28:13.436 [2024-04-26 16:10:52.845195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.845601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.845614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 00:28:13.436 [2024-04-26 16:10:52.845996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.846285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.846300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 00:28:13.436 [2024-04-26 16:10:52.846639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.847084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.847097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 00:28:13.436 [2024-04-26 16:10:52.847444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.847773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.847785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 00:28:13.436 [2024-04-26 16:10:52.848143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.848427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.848440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 
00:28:13.436 [2024-04-26 16:10:52.848707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.849096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.849109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 00:28:13.436 [2024-04-26 16:10:52.849383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.849714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.849726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 00:28:13.436 [2024-04-26 16:10:52.850178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.850562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.850574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 00:28:13.436 [2024-04-26 16:10:52.850913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.851324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.851336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 00:28:13.436 [2024-04-26 16:10:52.851674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.852081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.852094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 00:28:13.436 [2024-04-26 16:10:52.852510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.852786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.852799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 00:28:13.436 [2024-04-26 16:10:52.853211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.853569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.853584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 
00:28:13.436 [2024-04-26 16:10:52.853973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.854257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.854270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 00:28:13.436 [2024-04-26 16:10:52.854579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.855053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.855065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.436 qpair failed and we were unable to recover it. 00:28:13.436 [2024-04-26 16:10:52.855373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.436 [2024-04-26 16:10:52.855760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.855773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.437 qpair failed and we were unable to recover it. 00:28:13.437 [2024-04-26 16:10:52.856185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.856533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.856546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.437 qpair failed and we were unable to recover it. 00:28:13.437 [2024-04-26 16:10:52.856880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.857262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.857275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.437 qpair failed and we were unable to recover it. 00:28:13.437 [2024-04-26 16:10:52.857672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.858058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.858074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.437 qpair failed and we were unable to recover it. 00:28:13.437 [2024-04-26 16:10:52.858422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.858807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.858820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.437 qpair failed and we were unable to recover it. 
00:28:13.437 [2024-04-26 16:10:52.859231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.859523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.859535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.437 qpair failed and we were unable to recover it. 00:28:13.437 [2024-04-26 16:10:52.859824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.860134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.860147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.437 qpair failed and we were unable to recover it. 00:28:13.437 [2024-04-26 16:10:52.860509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.860894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.860906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.437 qpair failed and we were unable to recover it. 00:28:13.437 [2024-04-26 16:10:52.861321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.861658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.861671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.437 qpair failed and we were unable to recover it. 00:28:13.437 [2024-04-26 16:10:52.862096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.862367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.862380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.437 qpair failed and we were unable to recover it. 00:28:13.437 [2024-04-26 16:10:52.862721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.863067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.863083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.437 qpair failed and we were unable to recover it. 00:28:13.437 [2024-04-26 16:10:52.863415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.863770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.863782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.437 qpair failed and we were unable to recover it. 
00:28:13.437 [2024-04-26 16:10:52.864141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.864532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.864544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.437 qpair failed and we were unable to recover it. 00:28:13.437 [2024-04-26 16:10:52.864964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.865347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.865360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.437 qpair failed and we were unable to recover it. 00:28:13.437 [2024-04-26 16:10:52.865701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.866048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.866060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.437 qpair failed and we were unable to recover it. 00:28:13.437 [2024-04-26 16:10:52.866396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.866665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.866677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.437 qpair failed and we were unable to recover it. 00:28:13.437 [2024-04-26 16:10:52.866999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.867499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.867512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.437 qpair failed and we were unable to recover it. 00:28:13.437 [2024-04-26 16:10:52.867916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.868304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.868316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.437 qpair failed and we were unable to recover it. 00:28:13.437 [2024-04-26 16:10:52.868733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.869136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.869149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.437 qpair failed and we were unable to recover it. 
00:28:13.437 [2024-04-26 16:10:52.869517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.869869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.869881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.437 qpair failed and we were unable to recover it. 00:28:13.437 [2024-04-26 16:10:52.870203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.870536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.870549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.437 qpair failed and we were unable to recover it. 00:28:13.437 [2024-04-26 16:10:52.870950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.871331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.871344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.437 qpair failed and we were unable to recover it. 00:28:13.437 [2024-04-26 16:10:52.871634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.872091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.437 [2024-04-26 16:10:52.872104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 00:28:13.438 [2024-04-26 16:10:52.872434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.872717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.872729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 00:28:13.438 [2024-04-26 16:10:52.873148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.873450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.873463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 00:28:13.438 [2024-04-26 16:10:52.873742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.874148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.874161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 
00:28:13.438 [2024-04-26 16:10:52.874572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.874845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.874857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 00:28:13.438 [2024-04-26 16:10:52.875277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.875559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.875571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 00:28:13.438 [2024-04-26 16:10:52.875917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.876361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.876374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 00:28:13.438 [2024-04-26 16:10:52.876716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.877066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.877082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 00:28:13.438 [2024-04-26 16:10:52.877367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.877787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.877800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 00:28:13.438 [2024-04-26 16:10:52.878212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.878721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.878733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 00:28:13.438 [2024-04-26 16:10:52.879159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.879584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.879596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 
00:28:13.438 [2024-04-26 16:10:52.880032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.880365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.880381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 00:28:13.438 [2024-04-26 16:10:52.880666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.881059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.881075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 00:28:13.438 [2024-04-26 16:10:52.881434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.881835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.881847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 00:28:13.438 [2024-04-26 16:10:52.882257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.882653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.882665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 00:28:13.438 [2024-04-26 16:10:52.883076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.883407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.883420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 00:28:13.438 [2024-04-26 16:10:52.883766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.884112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.884125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 00:28:13.438 [2024-04-26 16:10:52.884458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.884812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.884823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 
00:28:13.438 [2024-04-26 16:10:52.885258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.885627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.885639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 00:28:13.438 [2024-04-26 16:10:52.886046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.886414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.886427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 00:28:13.438 [2024-04-26 16:10:52.886781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.887189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.887202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 00:28:13.438 [2024-04-26 16:10:52.887533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.887886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.887898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 00:28:13.438 [2024-04-26 16:10:52.888310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.888630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.888642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 00:28:13.438 [2024-04-26 16:10:52.889056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.889411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.889424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 00:28:13.438 [2024-04-26 16:10:52.889702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.890037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.438 [2024-04-26 16:10:52.890049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.438 qpair failed and we were unable to recover it. 
00:28:13.439 [2024-04-26 16:10:52.890399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.890745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.890757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.439 qpair failed and we were unable to recover it. 00:28:13.439 [2024-04-26 16:10:52.891110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.891447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.891460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.439 qpair failed and we were unable to recover it. 00:28:13.439 [2024-04-26 16:10:52.891753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.892157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.892170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.439 qpair failed and we were unable to recover it. 00:28:13.439 [2024-04-26 16:10:52.892500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.892833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.892845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.439 qpair failed and we were unable to recover it. 00:28:13.439 [2024-04-26 16:10:52.893243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.893523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.893536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.439 qpair failed and we were unable to recover it. 00:28:13.439 [2024-04-26 16:10:52.893945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.894206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.894219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.439 qpair failed and we were unable to recover it. 00:28:13.439 [2024-04-26 16:10:52.894567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.894829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.894842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.439 qpair failed and we were unable to recover it. 
00:28:13.439 [2024-04-26 16:10:52.895121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.895396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.895408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.439 qpair failed and we were unable to recover it. 00:28:13.439 [2024-04-26 16:10:52.895745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.896156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.896169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.439 qpair failed and we were unable to recover it. 00:28:13.439 [2024-04-26 16:10:52.896458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.896741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.896754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.439 qpair failed and we were unable to recover it. 00:28:13.439 [2024-04-26 16:10:52.897153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.897541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.897553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.439 qpair failed and we were unable to recover it. 00:28:13.439 [2024-04-26 16:10:52.897941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.898263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.898276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.439 qpair failed and we were unable to recover it. 00:28:13.439 [2024-04-26 16:10:52.898627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.899044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.899056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.439 qpair failed and we were unable to recover it. 00:28:13.439 [2024-04-26 16:10:52.899407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.899736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.899748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.439 qpair failed and we were unable to recover it. 
00:28:13.439 [2024-04-26 16:10:52.900124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.900409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.900422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.439 qpair failed and we were unable to recover it. 00:28:13.439 [2024-04-26 16:10:52.900818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.901172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.901184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.439 qpair failed and we were unable to recover it. 00:28:13.439 [2024-04-26 16:10:52.901524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.901849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.901861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.439 qpair failed and we were unable to recover it. 00:28:13.439 [2024-04-26 16:10:52.902273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.902608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.902620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.439 qpair failed and we were unable to recover it. 00:28:13.439 [2024-04-26 16:10:52.902974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.903379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.903393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.439 qpair failed and we were unable to recover it. 00:28:13.439 [2024-04-26 16:10:52.903659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.904119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.439 [2024-04-26 16:10:52.904132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 00:28:13.440 [2024-04-26 16:10:52.904520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.904798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.904810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 
00:28:13.440 [2024-04-26 16:10:52.905207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.905686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.905699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 00:28:13.440 [2024-04-26 16:10:52.906136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.906496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.906509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 00:28:13.440 [2024-04-26 16:10:52.906856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.907265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.907278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 00:28:13.440 [2024-04-26 16:10:52.907673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.908024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.908037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 00:28:13.440 [2024-04-26 16:10:52.908452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.908792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.908804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 00:28:13.440 [2024-04-26 16:10:52.909194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.909598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.909612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 00:28:13.440 [2024-04-26 16:10:52.910037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.910299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.910311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 
00:28:13.440 [2024-04-26 16:10:52.910653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.911068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.911085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 00:28:13.440 [2024-04-26 16:10:52.911424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.911671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.911684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 00:28:13.440 [2024-04-26 16:10:52.912035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.912322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.912335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 00:28:13.440 [2024-04-26 16:10:52.912727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.913166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.913193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 00:28:13.440 [2024-04-26 16:10:52.913557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.913923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.913941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 00:28:13.440 [2024-04-26 16:10:52.914294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.914693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.914712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 00:28:13.440 [2024-04-26 16:10:52.915065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.915469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.915488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 
00:28:13.440 [2024-04-26 16:10:52.915767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.916067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.916093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 00:28:13.440 [2024-04-26 16:10:52.916468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.916756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.916775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 00:28:13.440 [2024-04-26 16:10:52.917084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.917530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.917558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 00:28:13.440 [2024-04-26 16:10:52.917860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.918165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.918188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000002440 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 00:28:13.440 [2024-04-26 16:10:52.918531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.918807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.918822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 00:28:13.440 [2024-04-26 16:10:52.919162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.919512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.919526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 00:28:13.440 [2024-04-26 16:10:52.919940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.920234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.920248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 
00:28:13.440 [2024-04-26 16:10:52.920596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.920916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.920930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.440 qpair failed and we were unable to recover it. 00:28:13.440 [2024-04-26 16:10:52.921368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.440 [2024-04-26 16:10:52.921635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.921651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 00:28:13.441 [2024-04-26 16:10:52.921921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.922193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.922207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 00:28:13.441 [2024-04-26 16:10:52.922587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.922914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.922932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 00:28:13.441 [2024-04-26 16:10:52.923295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.923628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.923645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 00:28:13.441 [2024-04-26 16:10:52.923922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.924249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.924265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 00:28:13.441 [2024-04-26 16:10:52.924545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.924882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.924897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 
00:28:13.441 [2024-04-26 16:10:52.925180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.925465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.925479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 00:28:13.441 [2024-04-26 16:10:52.925834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.926185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.926203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 00:28:13.441 [2024-04-26 16:10:52.926550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.926743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.926762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 00:28:13.441 [2024-04-26 16:10:52.927129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.927394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.927407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 00:28:13.441 [2024-04-26 16:10:52.927690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.927953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.927966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 00:28:13.441 [2024-04-26 16:10:52.928242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.928501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.928515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 00:28:13.441 [2024-04-26 16:10:52.928837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.929166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.929179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 
00:28:13.441 [2024-04-26 16:10:52.929465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.929869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.929881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 00:28:13.441 [2024-04-26 16:10:52.930164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.930551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.930564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 00:28:13.441 [2024-04-26 16:10:52.930900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.931100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.931112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 00:28:13.441 [2024-04-26 16:10:52.931447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.931929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.931941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 00:28:13.441 [2024-04-26 16:10:52.932277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.932592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.932605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 00:28:13.441 [2024-04-26 16:10:52.932967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.933236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.933249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 00:28:13.441 [2024-04-26 16:10:52.933616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.933867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.933879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 
00:28:13.441 [2024-04-26 16:10:52.934267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.934589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.934602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 00:28:13.441 [2024-04-26 16:10:52.934867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.935145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.935158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 00:28:13.441 [2024-04-26 16:10:52.935618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.935936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.935948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 00:28:13.441 [2024-04-26 16:10:52.936277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.936623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.936636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 00:28:13.441 [2024-04-26 16:10:52.936968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.937236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.441 [2024-04-26 16:10:52.937249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.441 qpair failed and we were unable to recover it. 00:28:13.442 [2024-04-26 16:10:52.937523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.937900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.937912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 00:28:13.442 [2024-04-26 16:10:52.938302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.938640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.938652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 
00:28:13.442 [2024-04-26 16:10:52.938991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.939262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.939275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 00:28:13.442 [2024-04-26 16:10:52.939606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.939890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.939903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 00:28:13.442 [2024-04-26 16:10:52.940275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.940634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.940647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 00:28:13.442 [2024-04-26 16:10:52.940987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.941184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.941197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 00:28:13.442 [2024-04-26 16:10:52.941556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.941878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.941890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 00:28:13.442 [2024-04-26 16:10:52.942180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.942457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.942470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 00:28:13.442 [2024-04-26 16:10:52.942729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.943112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.943125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 
00:28:13.442 [2024-04-26 16:10:52.943479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.943757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.943771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 00:28:13.442 [2024-04-26 16:10:52.944098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.944510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.944522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 00:28:13.442 [2024-04-26 16:10:52.944786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.945127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.945140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 00:28:13.442 [2024-04-26 16:10:52.945422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.945744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.945757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 00:28:13.442 [2024-04-26 16:10:52.946082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.946362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.946377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 00:28:13.442 [2024-04-26 16:10:52.946722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.947004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.947017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 00:28:13.442 [2024-04-26 16:10:52.947368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.947684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.947697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 
00:28:13.442 [2024-04-26 16:10:52.948034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.948504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.948517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 00:28:13.442 [2024-04-26 16:10:52.948857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.949171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.949184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 00:28:13.442 [2024-04-26 16:10:52.949572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.949910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.949923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 00:28:13.442 [2024-04-26 16:10:52.950052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.950436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.950449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 00:28:13.442 [2024-04-26 16:10:52.950807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.951063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.951080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 00:28:13.442 [2024-04-26 16:10:52.951414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.951794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.951806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 00:28:13.442 [2024-04-26 16:10:52.952219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.952433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.952445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.442 qpair failed and we were unable to recover it. 
00:28:13.442 [2024-04-26 16:10:52.952704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.442 [2024-04-26 16:10:52.953031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.953046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.443 qpair failed and we were unable to recover it. 00:28:13.443 [2024-04-26 16:10:52.953336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.953613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.953625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.443 qpair failed and we were unable to recover it. 00:28:13.443 [2024-04-26 16:10:52.953896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.954016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.954028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.443 qpair failed and we were unable to recover it. 00:28:13.443 [2024-04-26 16:10:52.954293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.954642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.954654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.443 qpair failed and we were unable to recover it. 00:28:13.443 [2024-04-26 16:10:52.954922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.955315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.955328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.443 qpair failed and we were unable to recover it. 00:28:13.443 [2024-04-26 16:10:52.955660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.955933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.955946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.443 qpair failed and we were unable to recover it. 00:28:13.443 [2024-04-26 16:10:52.956274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.956760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.956772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.443 qpair failed and we were unable to recover it. 
00:28:13.443 [2024-04-26 16:10:52.957040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.957382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.957395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.443 qpair failed and we were unable to recover it. 00:28:13.443 [2024-04-26 16:10:52.957668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.957938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.957950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.443 qpair failed and we were unable to recover it. 00:28:13.443 [2024-04-26 16:10:52.958290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.958558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.958571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.443 qpair failed and we were unable to recover it. 00:28:13.443 [2024-04-26 16:10:52.958836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.959160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.959185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.443 qpair failed and we were unable to recover it. 00:28:13.443 [2024-04-26 16:10:52.959458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.959802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.959815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.443 qpair failed and we were unable to recover it. 00:28:13.443 [2024-04-26 16:10:52.960089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.960414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.960426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.443 qpair failed and we were unable to recover it. 00:28:13.443 [2024-04-26 16:10:52.960757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.961029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.443 [2024-04-26 16:10:52.961041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.443 qpair failed and we were unable to recover it. 
00:28:13.443 [2024-04-26 16:10:52.961297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.443 [2024-04-26 16:10:52.961624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.443 [2024-04-26 16:10:52.961637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:28:13.443 qpair failed and we were unable to recover it.
[... the same pattern (two posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats for every intervening reconnect attempt logged between 16:10:52.962 and 16:10:53.067 ...]
00:28:13.449 [2024-04-26 16:10:53.068136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.449 [2024-04-26 16:10:53.068518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.449 [2024-04-26 16:10:53.068530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420
00:28:13.449 qpair failed and we were unable to recover it.
00:28:13.449 [2024-04-26 16:10:53.068832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.069097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.069111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.449 qpair failed and we were unable to recover it. 00:28:13.449 [2024-04-26 16:10:53.069382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.069762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.069776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.449 qpair failed and we were unable to recover it. 00:28:13.449 [2024-04-26 16:10:53.070134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.070454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.070467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.449 qpair failed and we were unable to recover it. 00:28:13.449 [2024-04-26 16:10:53.070810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.071091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.071109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.449 qpair failed and we were unable to recover it. 00:28:13.449 [2024-04-26 16:10:53.071644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.071989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.072002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.449 qpair failed and we were unable to recover it. 00:28:13.449 [2024-04-26 16:10:53.072224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.072559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.072572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.449 qpair failed and we were unable to recover it. 00:28:13.449 [2024-04-26 16:10:53.072911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.073252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.073265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.449 qpair failed and we were unable to recover it. 
00:28:13.449 [2024-04-26 16:10:53.073653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.073809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.073822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.449 qpair failed and we were unable to recover it. 00:28:13.449 [2024-04-26 16:10:53.074152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.074543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.074557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.449 qpair failed and we were unable to recover it. 00:28:13.449 [2024-04-26 16:10:53.074920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.075306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.075319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.449 qpair failed and we were unable to recover it. 00:28:13.449 [2024-04-26 16:10:53.075602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.075928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.075940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.449 qpair failed and we were unable to recover it. 00:28:13.449 [2024-04-26 16:10:53.076287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.076606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.076618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.449 qpair failed and we were unable to recover it. 00:28:13.449 [2024-04-26 16:10:53.076939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.077343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.077357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.449 qpair failed and we were unable to recover it. 00:28:13.449 [2024-04-26 16:10:53.077710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.078124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.449 [2024-04-26 16:10:53.078136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 
00:28:13.450 [2024-04-26 16:10:53.078481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.078819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.078831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 00:28:13.450 [2024-04-26 16:10:53.079156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.079481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.079493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 00:28:13.450 [2024-04-26 16:10:53.079763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.080166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.080179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 00:28:13.450 [2024-04-26 16:10:53.080545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.080877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.080890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 00:28:13.450 [2024-04-26 16:10:53.081212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.081549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.081561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 00:28:13.450 [2024-04-26 16:10:53.081982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.082296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.082310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 00:28:13.450 [2024-04-26 16:10:53.082694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.083103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.083116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 
00:28:13.450 [2024-04-26 16:10:53.083512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.083795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.083807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 00:28:13.450 [2024-04-26 16:10:53.084138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.084519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.084532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 00:28:13.450 [2024-04-26 16:10:53.084944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.085335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.085348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 00:28:13.450 [2024-04-26 16:10:53.085711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.086113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.086126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 00:28:13.450 [2024-04-26 16:10:53.086421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.086800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.086813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 00:28:13.450 [2024-04-26 16:10:53.087190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.087520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.087533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 00:28:13.450 [2024-04-26 16:10:53.087846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.088274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.088287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 
00:28:13.450 [2024-04-26 16:10:53.088619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.089076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.089089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 00:28:13.450 [2024-04-26 16:10:53.089435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.089840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.089851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 00:28:13.450 [2024-04-26 16:10:53.090252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.090658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.090670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 00:28:13.450 [2024-04-26 16:10:53.091052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.091501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.091514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 00:28:13.450 [2024-04-26 16:10:53.091916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.092318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.092331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 00:28:13.450 [2024-04-26 16:10:53.092608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.093029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.093044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 00:28:13.450 [2024-04-26 16:10:53.093441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.093848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.093861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 
00:28:13.450 [2024-04-26 16:10:53.094251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.094659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 [2024-04-26 16:10:53.094671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.450 qpair failed and we were unable to recover it. 00:28:13.450 16:10:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:13.450 [2024-04-26 16:10:53.095017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.450 16:10:53 -- common/autotest_common.sh@850 -- # return 0 00:28:13.450 [2024-04-26 16:10:53.095426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.451 [2024-04-26 16:10:53.095439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.451 qpair failed and we were unable to recover it. 00:28:13.451 16:10:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:13.451 16:10:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:13.451 [2024-04-26 16:10:53.095827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.451 16:10:53 -- common/autotest_common.sh@10 -- # set +x 00:28:13.451 [2024-04-26 16:10:53.096235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.451 [2024-04-26 16:10:53.096248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.451 qpair failed and we were unable to recover it. 00:28:13.451 [2024-04-26 16:10:53.096589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.451 [2024-04-26 16:10:53.096980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.451 [2024-04-26 16:10:53.096993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.451 qpair failed and we were unable to recover it. 00:28:13.451 [2024-04-26 16:10:53.097352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.451 [2024-04-26 16:10:53.097700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.451 [2024-04-26 16:10:53.097713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.451 qpair failed and we were unable to recover it. 00:28:13.451 [2024-04-26 16:10:53.097997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.451 [2024-04-26 16:10:53.098314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.451 [2024-04-26 16:10:53.098334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.451 qpair failed and we were unable to recover it. 
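The autotest_common.sh trace interleaved above ((( i == 0 )), return 0, timing_exit start_nvmf_tgt) is the harness confirming that the nvmf target application has come up, while the initiator side keeps logging connect() failures with errno = 111. On Linux, errno 111 is ECONNREFUSED: no process is accepting TCP connections on 10.0.0.2:4420 at that instant, so the qpair connect attempts are refused and retried, which is consistent with the disconnect scenario this host/target_disconnect.sh run is driving. A quick way to confirm both points by hand, outside the harness (illustrative only; assumes a Linux host with python3 and the OpenBSD variant of nc, and is not part of the test script):
# errno 111 -> symbolic name on Linux
python3 -c 'import errno; print(errno.errorcode[111])'    # prints ECONNREFUSED
# probe the listener the initiator is retrying against
nc -z -w1 10.0.0.2 4420 && echo 'listener up' || echo 'refused or unreachable'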
00:28:13.451 [2024-04-26 16:10:53.098609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.451 [2024-04-26 16:10:53.098993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.451 [2024-04-26 16:10:53.099007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.451 qpair failed and we were unable to recover it. 00:28:13.714 [2024-04-26 16:10:53.099410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.714 [2024-04-26 16:10:53.099750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.714 [2024-04-26 16:10:53.099763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.714 qpair failed and we were unable to recover it. 00:28:13.714 [2024-04-26 16:10:53.100159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.714 [2024-04-26 16:10:53.100565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.714 [2024-04-26 16:10:53.100578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.714 qpair failed and we were unable to recover it. 00:28:13.714 [2024-04-26 16:10:53.100839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.714 [2024-04-26 16:10:53.101242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.714 [2024-04-26 16:10:53.101255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.714 qpair failed and we were unable to recover it. 00:28:13.714 [2024-04-26 16:10:53.101670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.714 [2024-04-26 16:10:53.102079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.714 [2024-04-26 16:10:53.102093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.714 qpair failed and we were unable to recover it. 00:28:13.714 [2024-04-26 16:10:53.102433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.714 [2024-04-26 16:10:53.102762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.714 [2024-04-26 16:10:53.102775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.714 qpair failed and we were unable to recover it. 00:28:13.714 [2024-04-26 16:10:53.103167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.714 [2024-04-26 16:10:53.103580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.714 [2024-04-26 16:10:53.103593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.714 qpair failed and we were unable to recover it. 
00:28:13.714 [2024-04-26 16:10:53.103877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.714 [2024-04-26 16:10:53.104205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.714 [2024-04-26 16:10:53.104239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.714 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.104583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.105033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.105046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.105519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.105954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.105967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.106403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.106767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.106780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.107167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.107521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.107534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.107970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.108372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.108386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.108784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.109120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.109133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 
00:28:13.715 [2024-04-26 16:10:53.109487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.109866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.109879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.110266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.110552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.110565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.110997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.111433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.111446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.111787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.112148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.112160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.112573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.112991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.113004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.113378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.113716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.113729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.114113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.114452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.114465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 
00:28:13.715 [2024-04-26 16:10:53.114748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.115087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.115100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.115514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.115845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.115857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.116267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.116556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.116569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.117013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.117426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.117440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.117802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.118206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.118220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.118619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.119039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.119051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.119407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.119764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.119776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 
00:28:13.715 [2024-04-26 16:10:53.120161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.120500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.120512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.120858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.121274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.121287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.121671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.122092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.122105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.122492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.122779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.122792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.123089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.123427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.123440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.123785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.124205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.715 [2024-04-26 16:10:53.124218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.715 qpair failed and we were unable to recover it. 00:28:13.715 [2024-04-26 16:10:53.124632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.125005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.125017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 
00:28:13.716 [2024-04-26 16:10:53.125383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.125771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.125785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-04-26 16:10:53.126154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.126492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.126505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-04-26 16:10:53.126972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.127356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.127369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-04-26 16:10:53.127710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.128025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.128038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-04-26 16:10:53.128349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.128687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.128700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-04-26 16:10:53.129094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.129453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.129466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-04-26 16:10:53.129804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.130210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.130223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 
00:28:13.716 16:10:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:13.716 [2024-04-26 16:10:53.130514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 16:10:53 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:13.716 [2024-04-26 16:10:53.130799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.130814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 16:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.716 [2024-04-26 16:10:53.131223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 16:10:53 -- common/autotest_common.sh@10 -- # set +x 00:28:13.716 [2024-04-26 16:10:53.131560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.131574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-04-26 16:10:53.131915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.132253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.132267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-04-26 16:10:53.132604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.132945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.132957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-04-26 16:10:53.133240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.133511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.133524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-04-26 16:10:53.133930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.134363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.134376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 
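Two harness steps are interleaved with the error stream here: nvmf/common.sh@473 installs a trap so that process_shm --id $NVMF_APP_SHM_ID and nvmftestfini run on SIGINT/SIGTERM/EXIT (collecting the target's shared-memory state and tearing the nvmf fixtures down even if the test aborts early), and host/target_disconnect.sh@19 issues rpc_cmd bdev_malloc_create 64 512 -b Malloc0, creating a RAM-backed malloc bdev of 64 MB with a 512-byte block size named Malloc0 for the rest of the test to use. Outside the harness the same RPC is typically sent with SPDK's rpc.py against the target's RPC socket (a sketch assuming the default socket path; not taken from this log):
# create a 64 MB malloc bdev with 512-byte blocks, named Malloc0
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# verify it exists
./scripts/rpc.py bdev_get_bdevs -b Malloc0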
00:28:13.716 [2024-04-26 16:10:53.134660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.135086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.135098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-04-26 16:10:53.135389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.135793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.135805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-04-26 16:10:53.136215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.136547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.136560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-04-26 16:10:53.136974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.137394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.137408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-04-26 16:10:53.137798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.138209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.138222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-04-26 16:10:53.138514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.138794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.138807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-04-26 16:10:53.139148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.139483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.139498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 
00:28:13.716 [2024-04-26 16:10:53.139793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.140197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.140211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-04-26 16:10:53.140494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.140774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.140787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-04-26 16:10:53.141198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.141480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.141493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-04-26 16:10:53.141816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.142202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.716 [2024-04-26 16:10:53.142216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.716 qpair failed and we were unable to recover it. 00:28:13.716 [2024-04-26 16:10:53.142629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.143030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.143044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-04-26 16:10:53.143457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.143796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.143809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-04-26 16:10:53.144147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.144508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.144522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 
00:28:13.717 [2024-04-26 16:10:53.144820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.145149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.145164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-04-26 16:10:53.145486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.145930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.145950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-04-26 16:10:53.146375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.146718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.146732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-04-26 16:10:53.147174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.147510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.147525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-04-26 16:10:53.147944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.148373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.148390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-04-26 16:10:53.148777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.149186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.149203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-04-26 16:10:53.149612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.150025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.150038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 
00:28:13.717 [2024-04-26 16:10:53.150438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.150798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.150810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-04-26 16:10:53.151252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.151634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.151647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-04-26 16:10:53.152096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.152499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.152511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-04-26 16:10:53.152875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.153304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.153318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-04-26 16:10:53.153658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.154086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.154099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-04-26 16:10:53.154438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.154757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.154769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-04-26 16:10:53.155197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.155548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.155561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 
00:28:13.717 [2024-04-26 16:10:53.155948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.156287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.156301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-04-26 16:10:53.156707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.157100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.157113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-04-26 16:10:53.157522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.157943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.157956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-04-26 16:10:53.158348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.158771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.158784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-04-26 16:10:53.159195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.159483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.159496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-04-26 16:10:53.159833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.160238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.160251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-04-26 16:10:53.160575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.160983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.160995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 
00:28:13.717 [2024-04-26 16:10:53.161395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.161730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.161742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-04-26 16:10:53.162155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.162479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.162491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.717 qpair failed and we were unable to recover it. 00:28:13.717 [2024-04-26 16:10:53.162877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.717 [2024-04-26 16:10:53.163199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.163211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-04-26 16:10:53.163623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.164047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.164060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-04-26 16:10:53.164472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.164802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.164815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-04-26 16:10:53.165167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.165482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.165494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-04-26 16:10:53.165940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.166340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.166352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 
00:28:13.718 [2024-04-26 16:10:53.166699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.167101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.167113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-04-26 16:10:53.167519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.167787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.167800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-04-26 16:10:53.168191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.168524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.168536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-04-26 16:10:53.168989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.169267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.169281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-04-26 16:10:53.169690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.170031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.170044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-04-26 16:10:53.170457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.170848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.170860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-04-26 16:10:53.171245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.171627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.171640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 
00:28:13.718 [2024-04-26 16:10:53.171992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.172257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.172270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-04-26 16:10:53.172603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.173010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.173023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-04-26 16:10:53.173437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.173755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.173767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-04-26 16:10:53.174178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.174502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.174514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-04-26 16:10:53.174926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.175317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.175330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-04-26 16:10:53.175746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.176073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.176086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-04-26 16:10:53.176472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.176803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.176815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 
00:28:13.718 [2024-04-26 16:10:53.177221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.177501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.177513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-04-26 16:10:53.177852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.178188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.178201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-04-26 16:10:53.178622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.179005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.179017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-04-26 16:10:53.179361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.179689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.179701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-04-26 16:10:53.180115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.180459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.180471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-04-26 16:10:53.180884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.181162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.181175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.718 [2024-04-26 16:10:53.181564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.181946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.181958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 
00:28:13.718 [2024-04-26 16:10:53.182293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.182697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.718 [2024-04-26 16:10:53.182710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.718 qpair failed and we were unable to recover it. 00:28:13.719 [2024-04-26 16:10:53.183075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.183468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.183480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-04-26 16:10:53.183891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.184281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.184294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-04-26 16:10:53.184708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.185098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.185111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-04-26 16:10:53.185394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.185712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.185725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-04-26 16:10:53.186067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.186474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.186487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-04-26 16:10:53.186838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.187243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.187255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 
00:28:13.719 [2024-04-26 16:10:53.187654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.188058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.188073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-04-26 16:10:53.188406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.188668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.188681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-04-26 16:10:53.189020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.189470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.189482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-04-26 16:10:53.189892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.190222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.190235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-04-26 16:10:53.190649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.191001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.191013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-04-26 16:10:53.191399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.191805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.191818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-04-26 16:10:53.192215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.192621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.192634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 
00:28:13.719 [2024-04-26 16:10:53.193033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.193411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.193424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-04-26 16:10:53.193789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.194113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.194126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-04-26 16:10:53.194541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.194929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.194941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-04-26 16:10:53.195355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.195733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.195745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-04-26 16:10:53.196155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.196549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.196561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-04-26 16:10:53.196854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.197255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.197268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.719 qpair failed and we were unable to recover it. 00:28:13.719 [2024-04-26 16:10:53.197596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.719 [2024-04-26 16:10:53.197999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.198011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 
00:28:13.720 [2024-04-26 16:10:53.198350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.198759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.198772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-04-26 16:10:53.199164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.199497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.199510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-04-26 16:10:53.199925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.200193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.200206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-04-26 16:10:53.200539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.200924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.200941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-04-26 16:10:53.201274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.201676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.201688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-04-26 16:10:53.202096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.202493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.202506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-04-26 16:10:53.202787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.203198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.203211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 
00:28:13.720 [2024-04-26 16:10:53.203531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.203897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.203910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-04-26 16:10:53.204245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.204585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.204598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-04-26 16:10:53.205008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.205396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.205410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-04-26 16:10:53.205674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.206082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.206095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-04-26 16:10:53.206480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.206865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.206877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-04-26 16:10:53.207261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.207642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.207655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-04-26 16:10:53.208067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.208388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.208401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 
00:28:13.720 [2024-04-26 16:10:53.208767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.209174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.209187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-04-26 16:10:53.209583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.209991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.210003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-04-26 16:10:53.210392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.210776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.210788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-04-26 16:10:53.211135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.211483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.211496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-04-26 16:10:53.211931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.212312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.212325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-04-26 16:10:53.212677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.213085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.213098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-04-26 16:10:53.213436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.213827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.213839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 
00:28:13.720 [2024-04-26 16:10:53.214249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.214639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.214651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-04-26 16:10:53.215038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.215373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.215386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-04-26 16:10:53.215803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.216218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.216231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-04-26 16:10:53.216618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.216996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.217009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.720 [2024-04-26 16:10:53.217422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.217817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.720 [2024-04-26 16:10:53.217829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.720 qpair failed and we were unable to recover it. 00:28:13.721 [2024-04-26 16:10:53.218238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.218589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.218602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-04-26 16:10:53.219010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.219328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.219341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 
00:28:13.721 [2024-04-26 16:10:53.219747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.220040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.220052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-04-26 16:10:53.220401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.220731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.220743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-04-26 16:10:53.221153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.221539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.221552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 Malloc0 00:28:13.721 [2024-04-26 16:10:53.221942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.222269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.222282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 16:10:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.721 [2024-04-26 16:10:53.222602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 16:10:53 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:13.721 [2024-04-26 16:10:53.223026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.223039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 16:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.721 [2024-04-26 16:10:53.223448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 16:10:53 -- common/autotest_common.sh@10 -- # set +x 00:28:13.721 [2024-04-26 16:10:53.223844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.223857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-04-26 16:10:53.224200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.224525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.224538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 
00:28:13.721 [2024-04-26 16:10:53.224956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.225359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.225372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-04-26 16:10:53.225695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.226079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.226091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-04-26 16:10:53.226500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.226904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.226916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-04-26 16:10:53.227241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.227568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.227581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-04-26 16:10:53.227993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.228375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.228388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-04-26 16:10:53.228712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.229044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.229056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-04-26 16:10:53.229450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.229471] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.721 [2024-04-26 16:10:53.229858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.229871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 
00:28:13.721 [2024-04-26 16:10:53.230155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.230485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.230498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-04-26 16:10:53.230905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.231237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.231251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-04-26 16:10:53.231633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.231959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.231972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-04-26 16:10:53.232386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.232649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.232663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-04-26 16:10:53.233076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.233499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.233514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-04-26 16:10:53.233816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.234084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.234099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-04-26 16:10:53.234451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.234779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.234793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 
00:28:13.721 [2024-04-26 16:10:53.235202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.235539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.235555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-04-26 16:10:53.235967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.236313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.236327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.721 qpair failed and we were unable to recover it. 00:28:13.721 [2024-04-26 16:10:53.236677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.721 [2024-04-26 16:10:53.237059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.237076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-04-26 16:10:53.237414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.237748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.237762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-04-26 16:10:53.238096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 16:10:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.722 16:10:53 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:13.722 [2024-04-26 16:10:53.238478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.238493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 16:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.722 [2024-04-26 16:10:53.238900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 16:10:53 -- common/autotest_common.sh@10 -- # set +x 00:28:13.722 [2024-04-26 16:10:53.239294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.239309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 
00:28:13.722 [2024-04-26 16:10:53.239628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.239982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.239996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-04-26 16:10:53.240350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.240630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.240644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-04-26 16:10:53.241035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.241445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.241460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-04-26 16:10:53.241850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.242256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.242271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-04-26 16:10:53.242667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.243004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.243018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-04-26 16:10:53.243347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.243751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.243765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-04-26 16:10:53.244150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.244478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.244492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 
00:28:13.722 [2024-04-26 16:10:53.244903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.245238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.245252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-04-26 16:10:53.245588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.245945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.245959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 16:10:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.722 [2024-04-26 16:10:53.246367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 16:10:53 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:13.722 16:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.722 [2024-04-26 16:10:53.246776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.246793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 16:10:53 -- common/autotest_common.sh@10 -- # set +x 00:28:13.722 [2024-04-26 16:10:53.247185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.247542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.247557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-04-26 16:10:53.247834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.248218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.248232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-04-26 16:10:53.248585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.249018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.249031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-04-26 16:10:53.249448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.249835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.249849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 
00:28:13.722 [2024-04-26 16:10:53.250177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.250598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.250612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-04-26 16:10:53.251037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.251429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.251443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-04-26 16:10:53.251855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.252215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.252229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-04-26 16:10:53.252564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.252897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.252911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-04-26 16:10:53.253263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.253696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.253710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 [2024-04-26 16:10:53.253991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 16:10:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.722 [2024-04-26 16:10:53.254324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.254339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.722 qpair failed and we were unable to recover it. 00:28:13.722 16:10:53 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:13.722 [2024-04-26 16:10:53.254746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 16:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.722 [2024-04-26 16:10:53.255025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.722 [2024-04-26 16:10:53.255041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 
00:28:13.723 16:10:53 -- common/autotest_common.sh@10 -- # set +x 00:28:13.723 [2024-04-26 16:10:53.255382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-04-26 16:10:53.255716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-04-26 16:10:53.255730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-04-26 16:10:53.256146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-04-26 16:10:53.256534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-04-26 16:10:53.256551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-04-26 16:10:53.256935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-04-26 16:10:53.257265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-04-26 16:10:53.257280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000020040 with addr=10.0.0.2, port=4420 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-04-26 16:10:53.257666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.723 [2024-04-26 16:10:53.257737] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.723 16:10:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.723 16:10:53 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:13.723 16:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.723 [2024-04-26 16:10:53.263032] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:28:13.723 [2024-04-26 16:10:53.263098] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000020040 (107): Transport endpoint is not connected 00:28:13.723 16:10:53 -- common/autotest_common.sh@10 -- # set +x 00:28:13.723 [2024-04-26 16:10:53.263205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:13.723 qpair failed and we were unable to recover it. 
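Stripped of the interleaved connect errors, the xtrace lines above show host/target_disconnect.sh building the target side over RPC: create subsystem nqn.2016-06.io.spdk:cnode1, attach the Malloc0 namespace, then add the subsystem and discovery listeners on 10.0.0.2:4420, after which the "NVMe/TCP Target Listening" notice appears. Assuming rpc_cmd here is the usual autotest wrapper around SPDK's scripts/rpc.py (and with the TCP transport and the Malloc0 bdev created earlier in the script, outside this excerpt), the equivalent standalone invocations would look roughly like:

  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420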
00:28:13.723 16:10:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.723 16:10:53 -- host/target_disconnect.sh@58 -- # wait 2605573 00:28:13.723 [2024-04-26 16:10:53.271079] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.723 [2024-04-26 16:10:53.271244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.723 [2024-04-26 16:10:53.271271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.723 [2024-04-26 16:10:53.271285] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.723 [2024-04-26 16:10:53.271296] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:28:13.723 [2024-04-26 16:10:53.271323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-04-26 16:10:53.280960] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.723 [2024-04-26 16:10:53.281124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.723 [2024-04-26 16:10:53.281149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.723 [2024-04-26 16:10:53.281163] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.723 [2024-04-26 16:10:53.281173] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:28:13.723 [2024-04-26 16:10:53.281197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-04-26 16:10:53.291005] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.723 [2024-04-26 16:10:53.291158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.723 [2024-04-26 16:10:53.291188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.723 [2024-04-26 16:10:53.291201] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.723 [2024-04-26 16:10:53.291214] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:28:13.723 [2024-04-26 16:10:53.291238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:13.723 qpair failed and we were unable to recover it. 
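From this point the failure mode changes: the TCP connection is established, but the Fabrics CONNECT command for the I/O queue pair is rejected ("sct 1, sc 130") while the target side logs "Unknown controller ID 0x1". Status code 130 is 0x82, which for Fabrics commands (status code type 1) is "Connect Invalid Parameters": the initiator's I/O-queue connect references a controller ID the target no longer recognizes, presumably the disconnect/recovery scenario target_disconnect.sh is driving, so each attempt ends in "CQ transport error -6" and an unrecoverable qpair. Decoding the value:

  python3 -c 'print(hex(130))'
  # 0x82 -> Fabrics "Connect Invalid Parameters" (SCT 1)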
00:28:13.723 [2024-04-26 16:10:53.300992] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.723 [2024-04-26 16:10:53.301146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.723 [2024-04-26 16:10:53.301168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.723 [2024-04-26 16:10:53.301180] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.723 [2024-04-26 16:10:53.301189] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:28:13.723 [2024-04-26 16:10:53.301211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-04-26 16:10:53.311004] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.723 [2024-04-26 16:10:53.311153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.723 [2024-04-26 16:10:53.311177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.723 [2024-04-26 16:10:53.311189] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.723 [2024-04-26 16:10:53.311198] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:28:13.723 [2024-04-26 16:10:53.311221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-04-26 16:10:53.321123] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.723 [2024-04-26 16:10:53.321261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.723 [2024-04-26 16:10:53.321285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.723 [2024-04-26 16:10:53.321296] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.723 [2024-04-26 16:10:53.321306] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:28:13.723 [2024-04-26 16:10:53.321329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:13.723 qpair failed and we were unable to recover it. 
00:28:13.723 [2024-04-26 16:10:53.331100] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.723 [2024-04-26 16:10:53.331246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.723 [2024-04-26 16:10:53.331270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.723 [2024-04-26 16:10:53.331281] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.723 [2024-04-26 16:10:53.331290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:28:13.723 [2024-04-26 16:10:53.331313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-04-26 16:10:53.341224] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.723 [2024-04-26 16:10:53.341390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.723 [2024-04-26 16:10:53.341425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.723 [2024-04-26 16:10:53.341446] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.723 [2024-04-26 16:10:53.341459] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.723 [2024-04-26 16:10:53.341492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.723 qpair failed and we were unable to recover it. 00:28:13.723 [2024-04-26 16:10:53.351218] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.723 [2024-04-26 16:10:53.351378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.723 [2024-04-26 16:10:53.351406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.723 [2024-04-26 16:10:53.351425] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.723 [2024-04-26 16:10:53.351436] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.723 [2024-04-26 16:10:53.351463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.723 qpair failed and we were unable to recover it. 
00:28:13.723 [2024-04-26 16:10:53.361162] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.723 [2024-04-26 16:10:53.361302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.723 [2024-04-26 16:10:53.361327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.724 [2024-04-26 16:10:53.361339] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.724 [2024-04-26 16:10:53.361348] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.724 [2024-04-26 16:10:53.361372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-04-26 16:10:53.371269] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.724 [2024-04-26 16:10:53.371416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.724 [2024-04-26 16:10:53.371440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.724 [2024-04-26 16:10:53.371452] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.724 [2024-04-26 16:10:53.371461] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.724 [2024-04-26 16:10:53.371484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.724 [2024-04-26 16:10:53.381231] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.724 [2024-04-26 16:10:53.381373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.724 [2024-04-26 16:10:53.381398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.724 [2024-04-26 16:10:53.381413] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.724 [2024-04-26 16:10:53.381422] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.724 [2024-04-26 16:10:53.381444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.724 qpair failed and we were unable to recover it. 
00:28:13.724 [2024-04-26 16:10:53.391328] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.724 [2024-04-26 16:10:53.391534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.724 [2024-04-26 16:10:53.391559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.724 [2024-04-26 16:10:53.391571] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.724 [2024-04-26 16:10:53.391588] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.724 [2024-04-26 16:10:53.391612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.724 qpair failed and we were unable to recover it. 00:28:13.982 [2024-04-26 16:10:53.401236] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.982 [2024-04-26 16:10:53.401383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.982 [2024-04-26 16:10:53.401408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.982 [2024-04-26 16:10:53.401420] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.982 [2024-04-26 16:10:53.401429] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.982 [2024-04-26 16:10:53.401452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.982 qpair failed and we were unable to recover it. 00:28:13.982 [2024-04-26 16:10:53.411414] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.982 [2024-04-26 16:10:53.411558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.982 [2024-04-26 16:10:53.411583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.982 [2024-04-26 16:10:53.411595] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.982 [2024-04-26 16:10:53.411604] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.982 [2024-04-26 16:10:53.411627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.982 qpair failed and we were unable to recover it. 
00:28:13.982 [2024-04-26 16:10:53.421363] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.982 [2024-04-26 16:10:53.421656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.982 [2024-04-26 16:10:53.421680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.982 [2024-04-26 16:10:53.421691] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.982 [2024-04-26 16:10:53.421701] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.982 [2024-04-26 16:10:53.421724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.982 qpair failed and we were unable to recover it. 00:28:13.982 [2024-04-26 16:10:53.431413] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.982 [2024-04-26 16:10:53.431555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.982 [2024-04-26 16:10:53.431578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.982 [2024-04-26 16:10:53.431590] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.982 [2024-04-26 16:10:53.431598] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.982 [2024-04-26 16:10:53.431621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.982 qpair failed and we were unable to recover it. 00:28:13.982 [2024-04-26 16:10:53.441422] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.982 [2024-04-26 16:10:53.441563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.982 [2024-04-26 16:10:53.441586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.982 [2024-04-26 16:10:53.441597] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.982 [2024-04-26 16:10:53.441606] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.982 [2024-04-26 16:10:53.441628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.982 qpair failed and we were unable to recover it. 
00:28:13.982 [2024-04-26 16:10:53.451390] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.982 [2024-04-26 16:10:53.451533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.982 [2024-04-26 16:10:53.451555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.982 [2024-04-26 16:10:53.451567] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.982 [2024-04-26 16:10:53.451575] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.982 [2024-04-26 16:10:53.451598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.982 qpair failed and we were unable to recover it. 00:28:13.982 [2024-04-26 16:10:53.461433] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.982 [2024-04-26 16:10:53.461578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.982 [2024-04-26 16:10:53.461600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.982 [2024-04-26 16:10:53.461612] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.982 [2024-04-26 16:10:53.461620] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.982 [2024-04-26 16:10:53.461644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.982 qpair failed and we were unable to recover it. 00:28:13.982 [2024-04-26 16:10:53.471637] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.982 [2024-04-26 16:10:53.471780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.982 [2024-04-26 16:10:53.471806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.982 [2024-04-26 16:10:53.471818] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.982 [2024-04-26 16:10:53.471826] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.982 [2024-04-26 16:10:53.471848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.982 qpair failed and we were unable to recover it. 
00:28:13.982 [2024-04-26 16:10:53.481503] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.982 [2024-04-26 16:10:53.481714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.982 [2024-04-26 16:10:53.481737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.982 [2024-04-26 16:10:53.481749] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.982 [2024-04-26 16:10:53.481758] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.982 [2024-04-26 16:10:53.481781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.982 qpair failed and we were unable to recover it. 00:28:13.982 [2024-04-26 16:10:53.491573] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.982 [2024-04-26 16:10:53.491892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.982 [2024-04-26 16:10:53.491915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.982 [2024-04-26 16:10:53.491926] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.982 [2024-04-26 16:10:53.491936] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.982 [2024-04-26 16:10:53.491959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.982 qpair failed and we were unable to recover it. 00:28:13.982 [2024-04-26 16:10:53.501571] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.982 [2024-04-26 16:10:53.501716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.982 [2024-04-26 16:10:53.501739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.982 [2024-04-26 16:10:53.501751] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.982 [2024-04-26 16:10:53.501760] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.982 [2024-04-26 16:10:53.501783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.982 qpair failed and we were unable to recover it. 
00:28:13.982 [2024-04-26 16:10:53.511554] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.982 [2024-04-26 16:10:53.511700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.982 [2024-04-26 16:10:53.511723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.982 [2024-04-26 16:10:53.511734] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.982 [2024-04-26 16:10:53.511743] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.982 [2024-04-26 16:10:53.511770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.982 qpair failed and we were unable to recover it. 00:28:13.982 [2024-04-26 16:10:53.521673] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.982 [2024-04-26 16:10:53.521855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.982 [2024-04-26 16:10:53.521878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.982 [2024-04-26 16:10:53.521890] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.982 [2024-04-26 16:10:53.521899] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.982 [2024-04-26 16:10:53.521921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.982 qpair failed and we were unable to recover it. 00:28:13.982 [2024-04-26 16:10:53.531675] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.982 [2024-04-26 16:10:53.531998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.982 [2024-04-26 16:10:53.532021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.982 [2024-04-26 16:10:53.532034] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.982 [2024-04-26 16:10:53.532043] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.982 [2024-04-26 16:10:53.532065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.982 qpair failed and we were unable to recover it. 
00:28:13.982 [2024-04-26 16:10:53.541669] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.982 [2024-04-26 16:10:53.541808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.982 [2024-04-26 16:10:53.541831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.982 [2024-04-26 16:10:53.541843] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.982 [2024-04-26 16:10:53.541851] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.982 [2024-04-26 16:10:53.541873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.982 qpair failed and we were unable to recover it. 00:28:13.982 [2024-04-26 16:10:53.551765] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.982 [2024-04-26 16:10:53.551901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.982 [2024-04-26 16:10:53.551924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.982 [2024-04-26 16:10:53.551936] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.982 [2024-04-26 16:10:53.551945] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.982 [2024-04-26 16:10:53.551967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.982 qpair failed and we were unable to recover it. 00:28:13.982 [2024-04-26 16:10:53.561766] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.982 [2024-04-26 16:10:53.561904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.982 [2024-04-26 16:10:53.561930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.982 [2024-04-26 16:10:53.561941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.982 [2024-04-26 16:10:53.561950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.982 [2024-04-26 16:10:53.561973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.982 qpair failed and we were unable to recover it. 
00:28:13.982 [2024-04-26 16:10:53.571813] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.982 [2024-04-26 16:10:53.571956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.982 [2024-04-26 16:10:53.571978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.982 [2024-04-26 16:10:53.571990] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.982 [2024-04-26 16:10:53.571999] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.982 [2024-04-26 16:10:53.572021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.982 qpair failed and we were unable to recover it. 00:28:13.982 [2024-04-26 16:10:53.581823] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.982 [2024-04-26 16:10:53.581954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.982 [2024-04-26 16:10:53.581977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.982 [2024-04-26 16:10:53.581988] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.982 [2024-04-26 16:10:53.581997] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.982 [2024-04-26 16:10:53.582020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.982 qpair failed and we were unable to recover it. 00:28:13.982 [2024-04-26 16:10:53.591886] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.982 [2024-04-26 16:10:53.592029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.982 [2024-04-26 16:10:53.592051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.982 [2024-04-26 16:10:53.592063] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.982 [2024-04-26 16:10:53.592077] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.982 [2024-04-26 16:10:53.592100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.982 qpair failed and we were unable to recover it. 
00:28:13.982 [2024-04-26 16:10:53.601823] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.982 [2024-04-26 16:10:53.601958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.983 [2024-04-26 16:10:53.601981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.983 [2024-04-26 16:10:53.601992] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.983 [2024-04-26 16:10:53.602001] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.983 [2024-04-26 16:10:53.602027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.983 qpair failed and we were unable to recover it. 00:28:13.983 [2024-04-26 16:10:53.611951] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.983 [2024-04-26 16:10:53.612100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.983 [2024-04-26 16:10:53.612123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.983 [2024-04-26 16:10:53.612134] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.983 [2024-04-26 16:10:53.612143] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.983 [2024-04-26 16:10:53.612165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.983 qpair failed and we were unable to recover it. 00:28:13.983 [2024-04-26 16:10:53.621884] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.983 [2024-04-26 16:10:53.622026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.983 [2024-04-26 16:10:53.622048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.983 [2024-04-26 16:10:53.622059] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.983 [2024-04-26 16:10:53.622068] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.983 [2024-04-26 16:10:53.622099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.983 qpair failed and we were unable to recover it. 
00:28:13.983 [2024-04-26 16:10:53.631982] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.983 [2024-04-26 16:10:53.632128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.983 [2024-04-26 16:10:53.632152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.983 [2024-04-26 16:10:53.632164] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.983 [2024-04-26 16:10:53.632172] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.983 [2024-04-26 16:10:53.632196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.983 qpair failed and we were unable to recover it. 00:28:13.983 [2024-04-26 16:10:53.641974] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.983 [2024-04-26 16:10:53.642158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.983 [2024-04-26 16:10:53.642182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.983 [2024-04-26 16:10:53.642194] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.983 [2024-04-26 16:10:53.642203] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.983 [2024-04-26 16:10:53.642225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.983 qpair failed and we were unable to recover it. 00:28:13.983 [2024-04-26 16:10:53.651934] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.983 [2024-04-26 16:10:53.652079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.983 [2024-04-26 16:10:53.652108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.983 [2024-04-26 16:10:53.652120] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.983 [2024-04-26 16:10:53.652129] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.983 [2024-04-26 16:10:53.652151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.983 qpair failed and we were unable to recover it. 
00:28:13.983 [2024-04-26 16:10:53.662063] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:13.983 [2024-04-26 16:10:53.662209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:13.983 [2024-04-26 16:10:53.662235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:13.983 [2024-04-26 16:10:53.662248] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:13.983 [2024-04-26 16:10:53.662261] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:13.983 [2024-04-26 16:10:53.662291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:13.983 qpair failed and we were unable to recover it. 00:28:14.242 [2024-04-26 16:10:53.672074] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.242 [2024-04-26 16:10:53.672217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.242 [2024-04-26 16:10:53.672242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.242 [2024-04-26 16:10:53.672254] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.242 [2024-04-26 16:10:53.672263] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.242 [2024-04-26 16:10:53.672286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.242 qpair failed and we were unable to recover it. 00:28:14.242 [2024-04-26 16:10:53.682029] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.242 [2024-04-26 16:10:53.682172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.242 [2024-04-26 16:10:53.682196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.242 [2024-04-26 16:10:53.682208] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.242 [2024-04-26 16:10:53.682217] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.242 [2024-04-26 16:10:53.682240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.242 qpair failed and we were unable to recover it. 
00:28:14.242 [2024-04-26 16:10:53.692089] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.242 [2024-04-26 16:10:53.692236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.242 [2024-04-26 16:10:53.692259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.242 [2024-04-26 16:10:53.692270] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.242 [2024-04-26 16:10:53.692282] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.242 [2024-04-26 16:10:53.692305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.242 qpair failed and we were unable to recover it. 00:28:14.242 [2024-04-26 16:10:53.702249] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.242 [2024-04-26 16:10:53.702404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.242 [2024-04-26 16:10:53.702426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.242 [2024-04-26 16:10:53.702438] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.242 [2024-04-26 16:10:53.702446] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.242 [2024-04-26 16:10:53.702468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.242 qpair failed and we were unable to recover it. 00:28:14.242 [2024-04-26 16:10:53.712227] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.242 [2024-04-26 16:10:53.712414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.242 [2024-04-26 16:10:53.712437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.242 [2024-04-26 16:10:53.712448] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.242 [2024-04-26 16:10:53.712458] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.242 [2024-04-26 16:10:53.712481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.242 qpair failed and we were unable to recover it. 
00:28:14.242 [2024-04-26 16:10:53.722190] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.242 [2024-04-26 16:10:53.722361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.242 [2024-04-26 16:10:53.722384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.242 [2024-04-26 16:10:53.722395] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.242 [2024-04-26 16:10:53.722404] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.242 [2024-04-26 16:10:53.722426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.242 qpair failed and we were unable to recover it. 00:28:14.242 [2024-04-26 16:10:53.732278] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.242 [2024-04-26 16:10:53.732416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.242 [2024-04-26 16:10:53.732439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.242 [2024-04-26 16:10:53.732451] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.242 [2024-04-26 16:10:53.732459] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.242 [2024-04-26 16:10:53.732482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.242 qpair failed and we were unable to recover it. 00:28:14.242 [2024-04-26 16:10:53.742306] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.242 [2024-04-26 16:10:53.742448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.242 [2024-04-26 16:10:53.742471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.242 [2024-04-26 16:10:53.742483] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.242 [2024-04-26 16:10:53.742492] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.242 [2024-04-26 16:10:53.742515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.242 qpair failed and we were unable to recover it. 
00:28:14.242 [2024-04-26 16:10:53.752351] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.242 [2024-04-26 16:10:53.752490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.242 [2024-04-26 16:10:53.752512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.243 [2024-04-26 16:10:53.752523] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.243 [2024-04-26 16:10:53.752532] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.243 [2024-04-26 16:10:53.752559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.243 qpair failed and we were unable to recover it. 00:28:14.243 [2024-04-26 16:10:53.762352] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.243 [2024-04-26 16:10:53.762507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.243 [2024-04-26 16:10:53.762532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.243 [2024-04-26 16:10:53.762544] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.243 [2024-04-26 16:10:53.762553] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.243 [2024-04-26 16:10:53.762576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.243 qpair failed and we were unable to recover it. 00:28:14.243 [2024-04-26 16:10:53.772468] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.243 [2024-04-26 16:10:53.772627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.243 [2024-04-26 16:10:53.772650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.243 [2024-04-26 16:10:53.772661] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.243 [2024-04-26 16:10:53.772670] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.243 [2024-04-26 16:10:53.772693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.243 qpair failed and we were unable to recover it. 
00:28:14.243 [2024-04-26 16:10:53.782460] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.243 [2024-04-26 16:10:53.782741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.243 [2024-04-26 16:10:53.782764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.243 [2024-04-26 16:10:53.782779] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.243 [2024-04-26 16:10:53.782789] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.243 [2024-04-26 16:10:53.782811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.243 qpair failed and we were unable to recover it. 00:28:14.243 [2024-04-26 16:10:53.792404] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.243 [2024-04-26 16:10:53.792547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.243 [2024-04-26 16:10:53.792571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.243 [2024-04-26 16:10:53.792582] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.243 [2024-04-26 16:10:53.792591] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.243 [2024-04-26 16:10:53.792613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.243 qpair failed and we were unable to recover it. 00:28:14.243 [2024-04-26 16:10:53.802380] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.243 [2024-04-26 16:10:53.802520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.243 [2024-04-26 16:10:53.802542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.243 [2024-04-26 16:10:53.802554] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.243 [2024-04-26 16:10:53.802562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.243 [2024-04-26 16:10:53.802584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.243 qpair failed and we were unable to recover it. 
00:28:14.243 [2024-04-26 16:10:53.812515] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.243 [2024-04-26 16:10:53.812660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.243 [2024-04-26 16:10:53.812683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.243 [2024-04-26 16:10:53.812694] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.243 [2024-04-26 16:10:53.812702] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.243 [2024-04-26 16:10:53.812724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.243 qpair failed and we were unable to recover it. 00:28:14.243 [2024-04-26 16:10:53.822519] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.243 [2024-04-26 16:10:53.822652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.243 [2024-04-26 16:10:53.822676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.243 [2024-04-26 16:10:53.822688] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.243 [2024-04-26 16:10:53.822697] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.243 [2024-04-26 16:10:53.822720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.243 qpair failed and we were unable to recover it. 00:28:14.243 [2024-04-26 16:10:53.832605] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.243 [2024-04-26 16:10:53.832792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.243 [2024-04-26 16:10:53.832814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.243 [2024-04-26 16:10:53.832827] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.243 [2024-04-26 16:10:53.832836] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.243 [2024-04-26 16:10:53.832859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.243 qpair failed and we were unable to recover it. 
00:28:14.243 [2024-04-26 16:10:53.842687] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.243 [2024-04-26 16:10:53.842855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.243 [2024-04-26 16:10:53.842878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.243 [2024-04-26 16:10:53.842890] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.243 [2024-04-26 16:10:53.842899] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.243 [2024-04-26 16:10:53.842923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.243 qpair failed and we were unable to recover it. 00:28:14.243 [2024-04-26 16:10:53.852658] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.243 [2024-04-26 16:10:53.852816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.243 [2024-04-26 16:10:53.852839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.243 [2024-04-26 16:10:53.852851] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.243 [2024-04-26 16:10:53.852859] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.243 [2024-04-26 16:10:53.852882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.243 qpair failed and we were unable to recover it. 00:28:14.243 [2024-04-26 16:10:53.862619] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.243 [2024-04-26 16:10:53.862767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.243 [2024-04-26 16:10:53.862790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.243 [2024-04-26 16:10:53.862801] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.243 [2024-04-26 16:10:53.862809] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.243 [2024-04-26 16:10:53.862831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.243 qpair failed and we were unable to recover it. 
00:28:14.243 [2024-04-26 16:10:53.872726] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.243 [2024-04-26 16:10:53.872900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.243 [2024-04-26 16:10:53.872925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.243 [2024-04-26 16:10:53.872936] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.243 [2024-04-26 16:10:53.872946] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.243 [2024-04-26 16:10:53.872968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.243 qpair failed and we were unable to recover it. 00:28:14.244 [2024-04-26 16:10:53.882633] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.244 [2024-04-26 16:10:53.882764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.244 [2024-04-26 16:10:53.882787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.244 [2024-04-26 16:10:53.882798] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.244 [2024-04-26 16:10:53.882807] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.244 [2024-04-26 16:10:53.882829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.244 qpair failed and we were unable to recover it. 00:28:14.244 [2024-04-26 16:10:53.892739] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.244 [2024-04-26 16:10:53.892878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.244 [2024-04-26 16:10:53.892901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.244 [2024-04-26 16:10:53.892913] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.244 [2024-04-26 16:10:53.892921] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.244 [2024-04-26 16:10:53.892944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.244 qpair failed and we were unable to recover it. 
00:28:14.244 [2024-04-26 16:10:53.902794] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.244 [2024-04-26 16:10:53.902930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.244 [2024-04-26 16:10:53.902953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.244 [2024-04-26 16:10:53.902965] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.244 [2024-04-26 16:10:53.902974] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.244 [2024-04-26 16:10:53.903003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.244 qpair failed and we were unable to recover it. 00:28:14.244 [2024-04-26 16:10:53.912851] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.244 [2024-04-26 16:10:53.912990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.244 [2024-04-26 16:10:53.913013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.244 [2024-04-26 16:10:53.913024] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.244 [2024-04-26 16:10:53.913032] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.244 [2024-04-26 16:10:53.913058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.244 qpair failed and we were unable to recover it. 00:28:14.244 [2024-04-26 16:10:53.922765] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.244 [2024-04-26 16:10:53.922903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.244 [2024-04-26 16:10:53.922928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.244 [2024-04-26 16:10:53.922942] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.244 [2024-04-26 16:10:53.922951] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.244 [2024-04-26 16:10:53.922975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.244 qpair failed and we were unable to recover it. 
00:28:14.503 [2024-04-26 16:10:53.932874] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.503 [2024-04-26 16:10:53.933012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.503 [2024-04-26 16:10:53.933037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.503 [2024-04-26 16:10:53.933050] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.503 [2024-04-26 16:10:53.933059] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.503 [2024-04-26 16:10:53.933091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.503 qpair failed and we were unable to recover it. 00:28:14.503 [2024-04-26 16:10:53.942976] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.503 [2024-04-26 16:10:53.943163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.503 [2024-04-26 16:10:53.943187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.503 [2024-04-26 16:10:53.943200] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.503 [2024-04-26 16:10:53.943210] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.503 [2024-04-26 16:10:53.943232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.503 qpair failed and we were unable to recover it. 00:28:14.503 [2024-04-26 16:10:53.953115] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.503 [2024-04-26 16:10:53.953397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.503 [2024-04-26 16:10:53.953419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.503 [2024-04-26 16:10:53.953430] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.503 [2024-04-26 16:10:53.953439] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.503 [2024-04-26 16:10:53.953461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.503 qpair failed and we were unable to recover it. 
00:28:14.503 [2024-04-26 16:10:53.962834] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.503 [2024-04-26 16:10:53.962972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.503 [2024-04-26 16:10:53.963001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.503 [2024-04-26 16:10:53.963013] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.503 [2024-04-26 16:10:53.963023] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.503 [2024-04-26 16:10:53.963045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.503 qpair failed and we were unable to recover it. 00:28:14.503 [2024-04-26 16:10:53.973039] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.503 [2024-04-26 16:10:53.973185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.503 [2024-04-26 16:10:53.973209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.503 [2024-04-26 16:10:53.973221] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.503 [2024-04-26 16:10:53.973230] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.503 [2024-04-26 16:10:53.973252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.503 qpair failed and we were unable to recover it. 00:28:14.503 [2024-04-26 16:10:53.982957] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.503 [2024-04-26 16:10:53.983097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.503 [2024-04-26 16:10:53.983121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.503 [2024-04-26 16:10:53.983132] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.503 [2024-04-26 16:10:53.983141] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.503 [2024-04-26 16:10:53.983168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.503 qpair failed and we were unable to recover it. 
00:28:14.503 [2024-04-26 16:10:53.993018] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.503 [2024-04-26 16:10:53.993195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.503 [2024-04-26 16:10:53.993218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.504 [2024-04-26 16:10:53.993229] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.504 [2024-04-26 16:10:53.993239] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.504 [2024-04-26 16:10:53.993262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.504 qpair failed and we were unable to recover it. 00:28:14.504 [2024-04-26 16:10:54.002964] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.504 [2024-04-26 16:10:54.003114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.504 [2024-04-26 16:10:54.003138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.504 [2024-04-26 16:10:54.003150] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.504 [2024-04-26 16:10:54.003159] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.504 [2024-04-26 16:10:54.003185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.504 qpair failed and we were unable to recover it. 00:28:14.504 [2024-04-26 16:10:54.013091] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.504 [2024-04-26 16:10:54.013238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.504 [2024-04-26 16:10:54.013261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.504 [2024-04-26 16:10:54.013273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.504 [2024-04-26 16:10:54.013281] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.504 [2024-04-26 16:10:54.013304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.504 qpair failed and we were unable to recover it. 
00:28:14.504 [2024-04-26 16:10:54.023096] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.504 [2024-04-26 16:10:54.023235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.504 [2024-04-26 16:10:54.023259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.504 [2024-04-26 16:10:54.023270] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.504 [2024-04-26 16:10:54.023279] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.504 [2024-04-26 16:10:54.023301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.504 qpair failed and we were unable to recover it. 00:28:14.504 [2024-04-26 16:10:54.033205] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.504 [2024-04-26 16:10:54.033346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.504 [2024-04-26 16:10:54.033370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.504 [2024-04-26 16:10:54.033382] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.504 [2024-04-26 16:10:54.033391] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.504 [2024-04-26 16:10:54.033413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.504 qpair failed and we were unable to recover it. 00:28:14.504 [2024-04-26 16:10:54.043050] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.504 [2024-04-26 16:10:54.043199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.504 [2024-04-26 16:10:54.043223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.504 [2024-04-26 16:10:54.043234] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.504 [2024-04-26 16:10:54.043242] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.504 [2024-04-26 16:10:54.043264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.504 qpair failed and we were unable to recover it. 
00:28:14.504 [2024-04-26 16:10:54.053163] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.504 [2024-04-26 16:10:54.053302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.504 [2024-04-26 16:10:54.053329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.504 [2024-04-26 16:10:54.053341] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.504 [2024-04-26 16:10:54.053350] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.504 [2024-04-26 16:10:54.053372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.504 qpair failed and we were unable to recover it. 00:28:14.504 [2024-04-26 16:10:54.063266] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.504 [2024-04-26 16:10:54.063403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.504 [2024-04-26 16:10:54.063426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.504 [2024-04-26 16:10:54.063437] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.504 [2024-04-26 16:10:54.063446] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.504 [2024-04-26 16:10:54.063468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.504 qpair failed and we were unable to recover it. 00:28:14.504 [2024-04-26 16:10:54.073222] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.504 [2024-04-26 16:10:54.073362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.504 [2024-04-26 16:10:54.073385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.504 [2024-04-26 16:10:54.073396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.504 [2024-04-26 16:10:54.073405] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.504 [2024-04-26 16:10:54.073428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.504 qpair failed and we were unable to recover it. 
00:28:14.504 [2024-04-26 16:10:54.083224] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.504 [2024-04-26 16:10:54.083368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.504 [2024-04-26 16:10:54.083392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.504 [2024-04-26 16:10:54.083404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.504 [2024-04-26 16:10:54.083414] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.504 [2024-04-26 16:10:54.083437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.504 qpair failed and we were unable to recover it. 00:28:14.504 [2024-04-26 16:10:54.093301] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.504 [2024-04-26 16:10:54.093453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.504 [2024-04-26 16:10:54.093476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.504 [2024-04-26 16:10:54.093488] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.504 [2024-04-26 16:10:54.093500] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.504 [2024-04-26 16:10:54.093523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.504 qpair failed and we were unable to recover it. 00:28:14.504 [2024-04-26 16:10:54.103260] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.504 [2024-04-26 16:10:54.103408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.504 [2024-04-26 16:10:54.103431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.504 [2024-04-26 16:10:54.103442] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.504 [2024-04-26 16:10:54.103450] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.504 [2024-04-26 16:10:54.103472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.504 qpair failed and we were unable to recover it. 
00:28:14.504 [2024-04-26 16:10:54.113333] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.504 [2024-04-26 16:10:54.113471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.504 [2024-04-26 16:10:54.113494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.505 [2024-04-26 16:10:54.113505] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.505 [2024-04-26 16:10:54.113514] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.505 [2024-04-26 16:10:54.113537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.505 qpair failed and we were unable to recover it. 00:28:14.505 [2024-04-26 16:10:54.123370] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.505 [2024-04-26 16:10:54.123512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.505 [2024-04-26 16:10:54.123536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.505 [2024-04-26 16:10:54.123547] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.505 [2024-04-26 16:10:54.123556] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.505 [2024-04-26 16:10:54.123578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.505 qpair failed and we were unable to recover it. 00:28:14.505 [2024-04-26 16:10:54.133359] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.505 [2024-04-26 16:10:54.133495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.505 [2024-04-26 16:10:54.133518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.505 [2024-04-26 16:10:54.133529] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.505 [2024-04-26 16:10:54.133537] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.505 [2024-04-26 16:10:54.133559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.505 qpair failed and we were unable to recover it. 
00:28:14.505 [2024-04-26 16:10:54.143382] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.505 [2024-04-26 16:10:54.143525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.505 [2024-04-26 16:10:54.143547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.505 [2024-04-26 16:10:54.143558] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.505 [2024-04-26 16:10:54.143568] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.505 [2024-04-26 16:10:54.143589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.505 qpair failed and we were unable to recover it. 00:28:14.505 [2024-04-26 16:10:54.153617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.505 [2024-04-26 16:10:54.153761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.505 [2024-04-26 16:10:54.153783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.505 [2024-04-26 16:10:54.153795] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.505 [2024-04-26 16:10:54.153803] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.505 [2024-04-26 16:10:54.153826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.505 qpair failed and we were unable to recover it. 00:28:14.505 [2024-04-26 16:10:54.163405] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.505 [2024-04-26 16:10:54.163543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.505 [2024-04-26 16:10:54.163566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.505 [2024-04-26 16:10:54.163583] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.505 [2024-04-26 16:10:54.163592] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.505 [2024-04-26 16:10:54.163615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.505 qpair failed and we were unable to recover it. 
00:28:14.505 [2024-04-26 16:10:54.173488] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.505 [2024-04-26 16:10:54.173625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.505 [2024-04-26 16:10:54.173647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.505 [2024-04-26 16:10:54.173658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.505 [2024-04-26 16:10:54.173667] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.505 [2024-04-26 16:10:54.173689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.505 qpair failed and we were unable to recover it. 00:28:14.505 [2024-04-26 16:10:54.183460] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.505 [2024-04-26 16:10:54.183601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.505 [2024-04-26 16:10:54.183626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.505 [2024-04-26 16:10:54.183641] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.505 [2024-04-26 16:10:54.183650] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.505 [2024-04-26 16:10:54.183673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.505 qpair failed and we were unable to recover it. 00:28:14.764 [2024-04-26 16:10:54.193578] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.764 [2024-04-26 16:10:54.193715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.764 [2024-04-26 16:10:54.193739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.764 [2024-04-26 16:10:54.193751] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.764 [2024-04-26 16:10:54.193760] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.764 [2024-04-26 16:10:54.193784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.764 qpair failed and we were unable to recover it. 
00:28:14.764 [2024-04-26 16:10:54.203593] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.764 [2024-04-26 16:10:54.203729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.764 [2024-04-26 16:10:54.203753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.764 [2024-04-26 16:10:54.203764] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.764 [2024-04-26 16:10:54.203774] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.764 [2024-04-26 16:10:54.203797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.764 qpair failed and we were unable to recover it. 00:28:14.764 [2024-04-26 16:10:54.213558] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.764 [2024-04-26 16:10:54.213702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.764 [2024-04-26 16:10:54.213726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.764 [2024-04-26 16:10:54.213738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.764 [2024-04-26 16:10:54.213747] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.764 [2024-04-26 16:10:54.213775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.764 qpair failed and we were unable to recover it. 00:28:14.764 [2024-04-26 16:10:54.223594] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.764 [2024-04-26 16:10:54.223731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.764 [2024-04-26 16:10:54.223753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.764 [2024-04-26 16:10:54.223764] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.764 [2024-04-26 16:10:54.223773] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.764 [2024-04-26 16:10:54.223794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.764 qpair failed and we were unable to recover it. 
00:28:14.764 [2024-04-26 16:10:54.233660] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.764 [2024-04-26 16:10:54.233793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.764 [2024-04-26 16:10:54.233817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.764 [2024-04-26 16:10:54.233828] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.764 [2024-04-26 16:10:54.233837] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.764 [2024-04-26 16:10:54.233859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.764 qpair failed and we were unable to recover it. 00:28:14.764 [2024-04-26 16:10:54.243609] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.764 [2024-04-26 16:10:54.243760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.764 [2024-04-26 16:10:54.243785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.764 [2024-04-26 16:10:54.243796] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.764 [2024-04-26 16:10:54.243806] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.764 [2024-04-26 16:10:54.243828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.764 qpair failed and we were unable to recover it. 00:28:14.764 [2024-04-26 16:10:54.253682] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.764 [2024-04-26 16:10:54.253821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.764 [2024-04-26 16:10:54.253844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.765 [2024-04-26 16:10:54.253855] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.765 [2024-04-26 16:10:54.253864] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.765 [2024-04-26 16:10:54.253886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.765 qpair failed and we were unable to recover it. 
00:28:14.765 [2024-04-26 16:10:54.263705] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.765 [2024-04-26 16:10:54.263846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.765 [2024-04-26 16:10:54.263870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.765 [2024-04-26 16:10:54.263883] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.765 [2024-04-26 16:10:54.263892] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.765 [2024-04-26 16:10:54.263914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.765 qpair failed and we were unable to recover it. 00:28:14.765 [2024-04-26 16:10:54.273783] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.765 [2024-04-26 16:10:54.273924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.765 [2024-04-26 16:10:54.273946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.765 [2024-04-26 16:10:54.273961] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.765 [2024-04-26 16:10:54.273970] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.765 [2024-04-26 16:10:54.273992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.765 qpair failed and we were unable to recover it. 00:28:14.765 [2024-04-26 16:10:54.283804] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.765 [2024-04-26 16:10:54.283961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.765 [2024-04-26 16:10:54.283984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.765 [2024-04-26 16:10:54.283996] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.765 [2024-04-26 16:10:54.284005] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.765 [2024-04-26 16:10:54.284027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.765 qpair failed and we were unable to recover it. 
00:28:14.765 [2024-04-26 16:10:54.293786] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.765 [2024-04-26 16:10:54.293933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.765 [2024-04-26 16:10:54.293955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.765 [2024-04-26 16:10:54.293967] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.765 [2024-04-26 16:10:54.293975] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.765 [2024-04-26 16:10:54.293997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.765 qpair failed and we were unable to recover it. 00:28:14.765 [2024-04-26 16:10:54.303803] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.765 [2024-04-26 16:10:54.303942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.765 [2024-04-26 16:10:54.303964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.765 [2024-04-26 16:10:54.303976] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.765 [2024-04-26 16:10:54.303985] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.765 [2024-04-26 16:10:54.304007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.765 qpair failed and we were unable to recover it. 00:28:14.765 [2024-04-26 16:10:54.313893] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.765 [2024-04-26 16:10:54.314038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.765 [2024-04-26 16:10:54.314063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.765 [2024-04-26 16:10:54.314083] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.765 [2024-04-26 16:10:54.314094] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.765 [2024-04-26 16:10:54.314117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.765 qpair failed and we were unable to recover it. 
00:28:14.765 [2024-04-26 16:10:54.323838] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.765 [2024-04-26 16:10:54.323974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.765 [2024-04-26 16:10:54.323997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.765 [2024-04-26 16:10:54.324009] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.765 [2024-04-26 16:10:54.324018] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.765 [2024-04-26 16:10:54.324040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.765 qpair failed and we were unable to recover it. 00:28:14.765 [2024-04-26 16:10:54.333952] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.765 [2024-04-26 16:10:54.334100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.765 [2024-04-26 16:10:54.334122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.765 [2024-04-26 16:10:54.334133] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.765 [2024-04-26 16:10:54.334141] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.765 [2024-04-26 16:10:54.334163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.765 qpair failed and we were unable to recover it. 00:28:14.765 [2024-04-26 16:10:54.344003] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.765 [2024-04-26 16:10:54.344151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.765 [2024-04-26 16:10:54.344174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.765 [2024-04-26 16:10:54.344185] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.765 [2024-04-26 16:10:54.344193] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.765 [2024-04-26 16:10:54.344215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.765 qpair failed and we were unable to recover it. 
00:28:14.765 [2024-04-26 16:10:54.354037] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.765 [2024-04-26 16:10:54.354203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.765 [2024-04-26 16:10:54.354226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.765 [2024-04-26 16:10:54.354237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.765 [2024-04-26 16:10:54.354246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.765 [2024-04-26 16:10:54.354268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.765 qpair failed and we were unable to recover it. 00:28:14.765 [2024-04-26 16:10:54.363909] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.765 [2024-04-26 16:10:54.364047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.765 [2024-04-26 16:10:54.364081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.765 [2024-04-26 16:10:54.364093] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.765 [2024-04-26 16:10:54.364101] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.765 [2024-04-26 16:10:54.364124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.765 qpair failed and we were unable to recover it. 00:28:14.765 [2024-04-26 16:10:54.374020] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.765 [2024-04-26 16:10:54.374163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.765 [2024-04-26 16:10:54.374186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.765 [2024-04-26 16:10:54.374198] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.765 [2024-04-26 16:10:54.374207] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.765 [2024-04-26 16:10:54.374228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.765 qpair failed and we were unable to recover it. 
00:28:14.766 [2024-04-26 16:10:54.384086] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.766 [2024-04-26 16:10:54.384228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.766 [2024-04-26 16:10:54.384250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.766 [2024-04-26 16:10:54.384261] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.766 [2024-04-26 16:10:54.384270] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.766 [2024-04-26 16:10:54.384292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.766 qpair failed and we were unable to recover it. 00:28:14.766 [2024-04-26 16:10:54.394211] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.766 [2024-04-26 16:10:54.394349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.766 [2024-04-26 16:10:54.394371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.766 [2024-04-26 16:10:54.394382] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.766 [2024-04-26 16:10:54.394391] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.766 [2024-04-26 16:10:54.394413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.766 qpair failed and we were unable to recover it. 00:28:14.766 [2024-04-26 16:10:54.404108] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.766 [2024-04-26 16:10:54.404247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.766 [2024-04-26 16:10:54.404270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.766 [2024-04-26 16:10:54.404281] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.766 [2024-04-26 16:10:54.404290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.766 [2024-04-26 16:10:54.404315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.766 qpair failed and we were unable to recover it. 
00:28:14.766 [2024-04-26 16:10:54.414106] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.766 [2024-04-26 16:10:54.414242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.766 [2024-04-26 16:10:54.414264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.766 [2024-04-26 16:10:54.414276] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.766 [2024-04-26 16:10:54.414284] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.766 [2024-04-26 16:10:54.414307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.766 qpair failed and we were unable to recover it. 00:28:14.766 [2024-04-26 16:10:54.424285] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.766 [2024-04-26 16:10:54.424417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.766 [2024-04-26 16:10:54.424440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.766 [2024-04-26 16:10:54.424452] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.766 [2024-04-26 16:10:54.424460] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.766 [2024-04-26 16:10:54.424482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.766 qpair failed and we were unable to recover it. 00:28:14.766 [2024-04-26 16:10:54.434256] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.766 [2024-04-26 16:10:54.434398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.766 [2024-04-26 16:10:54.434421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.766 [2024-04-26 16:10:54.434434] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.766 [2024-04-26 16:10:54.434443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.766 [2024-04-26 16:10:54.434465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.766 qpair failed and we were unable to recover it. 
00:28:14.766 [2024-04-26 16:10:54.444240] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:14.766 [2024-04-26 16:10:54.444397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:14.766 [2024-04-26 16:10:54.444422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:14.766 [2024-04-26 16:10:54.444434] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:14.766 [2024-04-26 16:10:54.444443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:14.766 [2024-04-26 16:10:54.444471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.766 qpair failed and we were unable to recover it. 00:28:15.025 [2024-04-26 16:10:54.454218] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.025 [2024-04-26 16:10:54.454357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.025 [2024-04-26 16:10:54.454385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.025 [2024-04-26 16:10:54.454397] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.025 [2024-04-26 16:10:54.454406] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.025 [2024-04-26 16:10:54.454429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.025 qpair failed and we were unable to recover it. 00:28:15.025 [2024-04-26 16:10:54.464374] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.026 [2024-04-26 16:10:54.464514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.026 [2024-04-26 16:10:54.464537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.026 [2024-04-26 16:10:54.464549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.026 [2024-04-26 16:10:54.464557] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.026 [2024-04-26 16:10:54.464580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.026 qpair failed and we were unable to recover it. 
00:28:15.026 [2024-04-26 16:10:54.474469] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.026 [2024-04-26 16:10:54.474603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.026 [2024-04-26 16:10:54.474628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.026 [2024-04-26 16:10:54.474641] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.026 [2024-04-26 16:10:54.474650] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.026 [2024-04-26 16:10:54.474673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.026 qpair failed and we were unable to recover it. 00:28:15.026 [2024-04-26 16:10:54.484317] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.026 [2024-04-26 16:10:54.484453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.026 [2024-04-26 16:10:54.484476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.026 [2024-04-26 16:10:54.484487] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.026 [2024-04-26 16:10:54.484496] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.026 [2024-04-26 16:10:54.484518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.026 qpair failed and we were unable to recover it. 00:28:15.026 [2024-04-26 16:10:54.494431] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.026 [2024-04-26 16:10:54.494572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.026 [2024-04-26 16:10:54.494594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.026 [2024-04-26 16:10:54.494606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.026 [2024-04-26 16:10:54.494618] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.026 [2024-04-26 16:10:54.494640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.026 qpair failed and we were unable to recover it. 
00:28:15.026 [2024-04-26 16:10:54.504482] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.026 [2024-04-26 16:10:54.504637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.026 [2024-04-26 16:10:54.504661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.026 [2024-04-26 16:10:54.504672] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.026 [2024-04-26 16:10:54.504680] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.026 [2024-04-26 16:10:54.504702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.026 qpair failed and we were unable to recover it. 00:28:15.026 [2024-04-26 16:10:54.514548] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.026 [2024-04-26 16:10:54.514700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.026 [2024-04-26 16:10:54.514723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.026 [2024-04-26 16:10:54.514734] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.026 [2024-04-26 16:10:54.514743] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.026 [2024-04-26 16:10:54.514765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.026 qpair failed and we were unable to recover it. 00:28:15.026 [2024-04-26 16:10:54.524448] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.026 [2024-04-26 16:10:54.524729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.026 [2024-04-26 16:10:54.524752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.026 [2024-04-26 16:10:54.524763] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.026 [2024-04-26 16:10:54.524772] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.026 [2024-04-26 16:10:54.524795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.026 qpair failed and we were unable to recover it. 
00:28:15.026 [2024-04-26 16:10:54.534511] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.026 [2024-04-26 16:10:54.534648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.026 [2024-04-26 16:10:54.534671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.026 [2024-04-26 16:10:54.534684] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.026 [2024-04-26 16:10:54.534692] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.026 [2024-04-26 16:10:54.534714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.026 qpair failed and we were unable to recover it. 00:28:15.026 [2024-04-26 16:10:54.544725] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.026 [2024-04-26 16:10:54.544865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.026 [2024-04-26 16:10:54.544887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.026 [2024-04-26 16:10:54.544899] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.026 [2024-04-26 16:10:54.544908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.026 [2024-04-26 16:10:54.544930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.026 qpair failed and we were unable to recover it. 00:28:15.026 [2024-04-26 16:10:54.554636] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.026 [2024-04-26 16:10:54.554776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.026 [2024-04-26 16:10:54.554798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.026 [2024-04-26 16:10:54.554809] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.026 [2024-04-26 16:10:54.554818] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.026 [2024-04-26 16:10:54.554839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.026 qpair failed and we were unable to recover it. 
00:28:15.026 [2024-04-26 16:10:54.564550] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.026 [2024-04-26 16:10:54.564690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.026 [2024-04-26 16:10:54.564713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.026 [2024-04-26 16:10:54.564724] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.026 [2024-04-26 16:10:54.564732] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.026 [2024-04-26 16:10:54.564754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.026 qpair failed and we were unable to recover it. 00:28:15.026 [2024-04-26 16:10:54.574648] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.026 [2024-04-26 16:10:54.574788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.026 [2024-04-26 16:10:54.574813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.026 [2024-04-26 16:10:54.574825] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.026 [2024-04-26 16:10:54.574835] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.026 [2024-04-26 16:10:54.574857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.026 qpair failed and we were unable to recover it. 00:28:15.026 [2024-04-26 16:10:54.584702] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.026 [2024-04-26 16:10:54.584848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.026 [2024-04-26 16:10:54.584870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.026 [2024-04-26 16:10:54.584884] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.026 [2024-04-26 16:10:54.584893] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.026 [2024-04-26 16:10:54.584916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.026 qpair failed and we were unable to recover it. 
00:28:15.026 [2024-04-26 16:10:54.594788] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.026 [2024-04-26 16:10:54.594942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.026 [2024-04-26 16:10:54.594964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.026 [2024-04-26 16:10:54.594976] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.026 [2024-04-26 16:10:54.594984] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.026 [2024-04-26 16:10:54.595007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.026 qpair failed and we were unable to recover it. 00:28:15.026 [2024-04-26 16:10:54.604681] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.026 [2024-04-26 16:10:54.604959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.026 [2024-04-26 16:10:54.604983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.026 [2024-04-26 16:10:54.604994] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.026 [2024-04-26 16:10:54.605004] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.026 [2024-04-26 16:10:54.605026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.026 qpair failed and we were unable to recover it. 00:28:15.026 [2024-04-26 16:10:54.614727] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.026 [2024-04-26 16:10:54.614866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.026 [2024-04-26 16:10:54.614888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.026 [2024-04-26 16:10:54.614900] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.026 [2024-04-26 16:10:54.614909] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.026 [2024-04-26 16:10:54.614931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.026 qpair failed and we were unable to recover it. 
00:28:15.026 [2024-04-26 16:10:54.624716] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.026 [2024-04-26 16:10:54.624854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.026 [2024-04-26 16:10:54.624878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.026 [2024-04-26 16:10:54.624890] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.026 [2024-04-26 16:10:54.624900] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.026 [2024-04-26 16:10:54.624923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.026 qpair failed and we were unable to recover it. 00:28:15.026 [2024-04-26 16:10:54.634852] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.026 [2024-04-26 16:10:54.634992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.026 [2024-04-26 16:10:54.635014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.026 [2024-04-26 16:10:54.635026] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.026 [2024-04-26 16:10:54.635035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.026 [2024-04-26 16:10:54.635057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.026 qpair failed and we were unable to recover it. 00:28:15.026 [2024-04-26 16:10:54.644782] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.026 [2024-04-26 16:10:54.644963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.026 [2024-04-26 16:10:54.644985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.026 [2024-04-26 16:10:54.644997] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.026 [2024-04-26 16:10:54.645007] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.026 [2024-04-26 16:10:54.645028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.026 qpair failed and we were unable to recover it. 
00:28:15.026 [2024-04-26 16:10:54.654895] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.026 [2024-04-26 16:10:54.655218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.026 [2024-04-26 16:10:54.655243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.026 [2024-04-26 16:10:54.655254] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.026 [2024-04-26 16:10:54.655263] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.026 [2024-04-26 16:10:54.655285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.026 qpair failed and we were unable to recover it. 00:28:15.026 [2024-04-26 16:10:54.664911] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.026 [2024-04-26 16:10:54.665201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.026 [2024-04-26 16:10:54.665224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.027 [2024-04-26 16:10:54.665237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.027 [2024-04-26 16:10:54.665246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.027 [2024-04-26 16:10:54.665268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.027 qpair failed and we were unable to recover it. 00:28:15.027 [2024-04-26 16:10:54.674947] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.027 [2024-04-26 16:10:54.675106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.027 [2024-04-26 16:10:54.675128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.027 [2024-04-26 16:10:54.675143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.027 [2024-04-26 16:10:54.675158] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.027 [2024-04-26 16:10:54.675183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.027 qpair failed and we were unable to recover it. 
00:28:15.027 [2024-04-26 16:10:54.684928] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.027 [2024-04-26 16:10:54.685080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.027 [2024-04-26 16:10:54.685104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.027 [2024-04-26 16:10:54.685116] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.027 [2024-04-26 16:10:54.685124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.027 [2024-04-26 16:10:54.685146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.027 qpair failed and we were unable to recover it. 00:28:15.027 [2024-04-26 16:10:54.695005] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.027 [2024-04-26 16:10:54.695148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.027 [2024-04-26 16:10:54.695171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.027 [2024-04-26 16:10:54.695182] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.027 [2024-04-26 16:10:54.695191] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.027 [2024-04-26 16:10:54.695213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.027 qpair failed and we were unable to recover it. 00:28:15.027 [2024-04-26 16:10:54.705052] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.027 [2024-04-26 16:10:54.705203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.027 [2024-04-26 16:10:54.705228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.027 [2024-04-26 16:10:54.705241] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.027 [2024-04-26 16:10:54.705250] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.027 [2024-04-26 16:10:54.705274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.027 qpair failed and we were unable to recover it. 
00:28:15.286 [2024-04-26 16:10:54.715105] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.286 [2024-04-26 16:10:54.715244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.286 [2024-04-26 16:10:54.715270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.286 [2024-04-26 16:10:54.715283] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.286 [2024-04-26 16:10:54.715293] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.286 [2024-04-26 16:10:54.715316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.286 qpair failed and we were unable to recover it. 00:28:15.286 [2024-04-26 16:10:54.725028] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.286 [2024-04-26 16:10:54.725169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.286 [2024-04-26 16:10:54.725194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.286 [2024-04-26 16:10:54.725207] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.286 [2024-04-26 16:10:54.725217] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.286 [2024-04-26 16:10:54.725239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.286 qpair failed and we were unable to recover it. 00:28:15.286 [2024-04-26 16:10:54.735104] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.286 [2024-04-26 16:10:54.735240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.286 [2024-04-26 16:10:54.735262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.286 [2024-04-26 16:10:54.735274] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.286 [2024-04-26 16:10:54.735283] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.286 [2024-04-26 16:10:54.735305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.286 qpair failed and we were unable to recover it. 
00:28:15.287 [2024-04-26 16:10:54.745136] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.287 [2024-04-26 16:10:54.745281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.287 [2024-04-26 16:10:54.745304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.287 [2024-04-26 16:10:54.745315] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.287 [2024-04-26 16:10:54.745324] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.287 [2024-04-26 16:10:54.745346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.287 qpair failed and we were unable to recover it. 00:28:15.287 [2024-04-26 16:10:54.755171] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.287 [2024-04-26 16:10:54.755314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.287 [2024-04-26 16:10:54.755337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.287 [2024-04-26 16:10:54.755349] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.287 [2024-04-26 16:10:54.755357] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.287 [2024-04-26 16:10:54.755379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.287 qpair failed and we were unable to recover it. 00:28:15.287 [2024-04-26 16:10:54.765174] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.287 [2024-04-26 16:10:54.765382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.287 [2024-04-26 16:10:54.765409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.287 [2024-04-26 16:10:54.765420] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.287 [2024-04-26 16:10:54.765430] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.287 [2024-04-26 16:10:54.765451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.287 qpair failed and we were unable to recover it. 
00:28:15.287 [2024-04-26 16:10:54.775244] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.287 [2024-04-26 16:10:54.775404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.287 [2024-04-26 16:10:54.775426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.287 [2024-04-26 16:10:54.775438] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.287 [2024-04-26 16:10:54.775446] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.287 [2024-04-26 16:10:54.775469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.287 qpair failed and we were unable to recover it. 00:28:15.287 [2024-04-26 16:10:54.785332] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.287 [2024-04-26 16:10:54.785476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.287 [2024-04-26 16:10:54.785499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.287 [2024-04-26 16:10:54.785511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.287 [2024-04-26 16:10:54.785520] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.287 [2024-04-26 16:10:54.785542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.287 qpair failed and we were unable to recover it. 00:28:15.287 [2024-04-26 16:10:54.795249] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.287 [2024-04-26 16:10:54.795383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.287 [2024-04-26 16:10:54.795406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.287 [2024-04-26 16:10:54.795417] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.287 [2024-04-26 16:10:54.795426] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.287 [2024-04-26 16:10:54.795449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.287 qpair failed and we were unable to recover it. 
00:28:15.287 [2024-04-26 16:10:54.805246] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.287 [2024-04-26 16:10:54.805437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.287 [2024-04-26 16:10:54.805458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.287 [2024-04-26 16:10:54.805469] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.287 [2024-04-26 16:10:54.805479] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.287 [2024-04-26 16:10:54.805503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.287 qpair failed and we were unable to recover it. 00:28:15.287 [2024-04-26 16:10:54.815337] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.287 [2024-04-26 16:10:54.815475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.287 [2024-04-26 16:10:54.815498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.287 [2024-04-26 16:10:54.815510] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.287 [2024-04-26 16:10:54.815519] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.287 [2024-04-26 16:10:54.815541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.287 qpair failed and we were unable to recover it. 00:28:15.287 [2024-04-26 16:10:54.825444] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.287 [2024-04-26 16:10:54.825618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.287 [2024-04-26 16:10:54.825641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.287 [2024-04-26 16:10:54.825654] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.287 [2024-04-26 16:10:54.825663] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.287 [2024-04-26 16:10:54.825686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.287 qpair failed and we were unable to recover it. 
00:28:15.287 [2024-04-26 16:10:54.835471] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.287 [2024-04-26 16:10:54.835648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.287 [2024-04-26 16:10:54.835670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.287 [2024-04-26 16:10:54.835682] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.287 [2024-04-26 16:10:54.835692] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.287 [2024-04-26 16:10:54.835715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.287 qpair failed and we were unable to recover it. 00:28:15.287 [2024-04-26 16:10:54.845417] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.287 [2024-04-26 16:10:54.845550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.287 [2024-04-26 16:10:54.845573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.287 [2024-04-26 16:10:54.845585] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.287 [2024-04-26 16:10:54.845593] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.287 [2024-04-26 16:10:54.845617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.288 qpair failed and we were unable to recover it. 00:28:15.288 [2024-04-26 16:10:54.855477] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.288 [2024-04-26 16:10:54.855656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.288 [2024-04-26 16:10:54.855682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.288 [2024-04-26 16:10:54.855695] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.288 [2024-04-26 16:10:54.855704] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.288 [2024-04-26 16:10:54.855726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.288 qpair failed and we were unable to recover it. 
00:28:15.288 [2024-04-26 16:10:54.865481] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.288 [2024-04-26 16:10:54.865619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.288 [2024-04-26 16:10:54.865641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.288 [2024-04-26 16:10:54.865653] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.288 [2024-04-26 16:10:54.865662] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.288 [2024-04-26 16:10:54.865684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.288 qpair failed and we were unable to recover it. 00:28:15.288 [2024-04-26 16:10:54.875519] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.288 [2024-04-26 16:10:54.875664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.288 [2024-04-26 16:10:54.875687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.288 [2024-04-26 16:10:54.875699] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.288 [2024-04-26 16:10:54.875707] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.288 [2024-04-26 16:10:54.875729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.288 qpair failed and we were unable to recover it. 00:28:15.288 [2024-04-26 16:10:54.885536] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.288 [2024-04-26 16:10:54.885676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.288 [2024-04-26 16:10:54.885698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.288 [2024-04-26 16:10:54.885710] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.288 [2024-04-26 16:10:54.885718] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.288 [2024-04-26 16:10:54.885740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.288 qpair failed and we were unable to recover it. 
00:28:15.288 [2024-04-26 16:10:54.895586] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.288 [2024-04-26 16:10:54.895724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.288 [2024-04-26 16:10:54.895746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.288 [2024-04-26 16:10:54.895758] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.288 [2024-04-26 16:10:54.895770] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.288 [2024-04-26 16:10:54.895792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.288 qpair failed and we were unable to recover it. 00:28:15.288 [2024-04-26 16:10:54.905653] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.288 [2024-04-26 16:10:54.905840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.288 [2024-04-26 16:10:54.905863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.288 [2024-04-26 16:10:54.905875] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.288 [2024-04-26 16:10:54.905884] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.288 [2024-04-26 16:10:54.905926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.288 qpair failed and we were unable to recover it. 00:28:15.288 [2024-04-26 16:10:54.915674] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.288 [2024-04-26 16:10:54.915813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.288 [2024-04-26 16:10:54.915836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.288 [2024-04-26 16:10:54.915848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.288 [2024-04-26 16:10:54.915857] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.288 [2024-04-26 16:10:54.915880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.288 qpair failed and we were unable to recover it. 
00:28:15.288 [2024-04-26 16:10:54.925799] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.288 [2024-04-26 16:10:54.925936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.288 [2024-04-26 16:10:54.925961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.288 [2024-04-26 16:10:54.925973] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.288 [2024-04-26 16:10:54.925983] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.288 [2024-04-26 16:10:54.926005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.288 qpair failed and we were unable to recover it. 00:28:15.288 [2024-04-26 16:10:54.935716] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.288 [2024-04-26 16:10:54.935856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.288 [2024-04-26 16:10:54.935885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.288 [2024-04-26 16:10:54.935897] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.288 [2024-04-26 16:10:54.935906] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.288 [2024-04-26 16:10:54.935929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.288 qpair failed and we were unable to recover it. 00:28:15.288 [2024-04-26 16:10:54.945728] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.288 [2024-04-26 16:10:54.945870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.288 [2024-04-26 16:10:54.945893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.288 [2024-04-26 16:10:54.945905] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.288 [2024-04-26 16:10:54.945914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.288 [2024-04-26 16:10:54.945936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.288 qpair failed and we were unable to recover it. 
00:28:15.288 [2024-04-26 16:10:54.955803] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.288 [2024-04-26 16:10:54.955938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.288 [2024-04-26 16:10:54.955960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.288 [2024-04-26 16:10:54.955971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.288 [2024-04-26 16:10:54.955980] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.288 [2024-04-26 16:10:54.956002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.288 qpair failed and we were unable to recover it. 00:28:15.288 [2024-04-26 16:10:54.965765] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.288 [2024-04-26 16:10:54.965907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.289 [2024-04-26 16:10:54.965932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.289 [2024-04-26 16:10:54.965944] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.289 [2024-04-26 16:10:54.965953] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.289 [2024-04-26 16:10:54.965976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.289 qpair failed and we were unable to recover it. 00:28:15.546 [2024-04-26 16:10:54.975814] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.547 [2024-04-26 16:10:54.975988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.547 [2024-04-26 16:10:54.976013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.547 [2024-04-26 16:10:54.976025] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.547 [2024-04-26 16:10:54.976035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.547 [2024-04-26 16:10:54.976059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.547 qpair failed and we were unable to recover it. 
00:28:15.547 [2024-04-26 16:10:54.985842] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.547 [2024-04-26 16:10:54.985985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.547 [2024-04-26 16:10:54.986008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.547 [2024-04-26 16:10:54.986020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.547 [2024-04-26 16:10:54.986032] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.547 [2024-04-26 16:10:54.986054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-04-26 16:10:54.995869] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.547 [2024-04-26 16:10:54.996001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.547 [2024-04-26 16:10:54.996026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.547 [2024-04-26 16:10:54.996038] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.547 [2024-04-26 16:10:54.996048] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.547 [2024-04-26 16:10:54.996077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-04-26 16:10:55.005884] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.547 [2024-04-26 16:10:55.006055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.547 [2024-04-26 16:10:55.006084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.547 [2024-04-26 16:10:55.006096] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.547 [2024-04-26 16:10:55.006105] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.547 [2024-04-26 16:10:55.006127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.547 qpair failed and we were unable to recover it. 
00:28:15.547 [2024-04-26 16:10:55.015879] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.547 [2024-04-26 16:10:55.016014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.547 [2024-04-26 16:10:55.016037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.547 [2024-04-26 16:10:55.016048] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.547 [2024-04-26 16:10:55.016058] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.547 [2024-04-26 16:10:55.016088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-04-26 16:10:55.025951] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.547 [2024-04-26 16:10:55.026093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.547 [2024-04-26 16:10:55.026118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.547 [2024-04-26 16:10:55.026130] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.547 [2024-04-26 16:10:55.026140] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.547 [2024-04-26 16:10:55.026163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-04-26 16:10:55.036049] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.547 [2024-04-26 16:10:55.036201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.547 [2024-04-26 16:10:55.036224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.547 [2024-04-26 16:10:55.036236] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.547 [2024-04-26 16:10:55.036246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.547 [2024-04-26 16:10:55.036268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.547 qpair failed and we were unable to recover it. 
00:28:15.547 [2024-04-26 16:10:55.045999] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.547 [2024-04-26 16:10:55.046151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.547 [2024-04-26 16:10:55.046173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.547 [2024-04-26 16:10:55.046185] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.547 [2024-04-26 16:10:55.046194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.547 [2024-04-26 16:10:55.046216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-04-26 16:10:55.056097] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.547 [2024-04-26 16:10:55.056269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.547 [2024-04-26 16:10:55.056293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.547 [2024-04-26 16:10:55.056304] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.547 [2024-04-26 16:10:55.056313] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.547 [2024-04-26 16:10:55.056335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-04-26 16:10:55.066022] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.547 [2024-04-26 16:10:55.066303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.547 [2024-04-26 16:10:55.066327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.547 [2024-04-26 16:10:55.066338] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.547 [2024-04-26 16:10:55.066347] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.547 [2024-04-26 16:10:55.066369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.547 qpair failed and we were unable to recover it. 
00:28:15.547 [2024-04-26 16:10:55.076156] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.547 [2024-04-26 16:10:55.076298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.547 [2024-04-26 16:10:55.076320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.547 [2024-04-26 16:10:55.076334] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.547 [2024-04-26 16:10:55.076343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.547 [2024-04-26 16:10:55.076365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-04-26 16:10:55.086090] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.547 [2024-04-26 16:10:55.086232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.547 [2024-04-26 16:10:55.086255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.547 [2024-04-26 16:10:55.086266] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.547 [2024-04-26 16:10:55.086275] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.547 [2024-04-26 16:10:55.086297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-04-26 16:10:55.096196] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.547 [2024-04-26 16:10:55.096337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.548 [2024-04-26 16:10:55.096360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.548 [2024-04-26 16:10:55.096372] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.548 [2024-04-26 16:10:55.096382] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.548 [2024-04-26 16:10:55.096404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.548 qpair failed and we were unable to recover it. 
00:28:15.548 [2024-04-26 16:10:55.106185] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.548 [2024-04-26 16:10:55.106327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.548 [2024-04-26 16:10:55.106350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.548 [2024-04-26 16:10:55.106362] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.548 [2024-04-26 16:10:55.106370] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.548 [2024-04-26 16:10:55.106393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-04-26 16:10:55.116293] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.548 [2024-04-26 16:10:55.116434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.548 [2024-04-26 16:10:55.116455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.548 [2024-04-26 16:10:55.116467] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.548 [2024-04-26 16:10:55.116475] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.548 [2024-04-26 16:10:55.116497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-04-26 16:10:55.126237] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.548 [2024-04-26 16:10:55.126377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.548 [2024-04-26 16:10:55.126400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.548 [2024-04-26 16:10:55.126411] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.548 [2024-04-26 16:10:55.126419] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.548 [2024-04-26 16:10:55.126441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.548 qpair failed and we were unable to recover it. 
00:28:15.548 [2024-04-26 16:10:55.136287] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.548 [2024-04-26 16:10:55.136432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.548 [2024-04-26 16:10:55.136455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.548 [2024-04-26 16:10:55.136466] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.548 [2024-04-26 16:10:55.136474] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.548 [2024-04-26 16:10:55.136501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-04-26 16:10:55.146299] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.548 [2024-04-26 16:10:55.146448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.548 [2024-04-26 16:10:55.146471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.548 [2024-04-26 16:10:55.146482] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.548 [2024-04-26 16:10:55.146491] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.548 [2024-04-26 16:10:55.146514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-04-26 16:10:55.156321] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.548 [2024-04-26 16:10:55.156466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.548 [2024-04-26 16:10:55.156488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.548 [2024-04-26 16:10:55.156500] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.548 [2024-04-26 16:10:55.156508] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.548 [2024-04-26 16:10:55.156530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.548 qpair failed and we were unable to recover it. 
00:28:15.548 [2024-04-26 16:10:55.166410] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.548 [2024-04-26 16:10:55.166589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.548 [2024-04-26 16:10:55.166615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.548 [2024-04-26 16:10:55.166627] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.548 [2024-04-26 16:10:55.166636] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.548 [2024-04-26 16:10:55.166659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-04-26 16:10:55.176372] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.548 [2024-04-26 16:10:55.176658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.548 [2024-04-26 16:10:55.176681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.548 [2024-04-26 16:10:55.176692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.548 [2024-04-26 16:10:55.176701] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.548 [2024-04-26 16:10:55.176723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-04-26 16:10:55.186438] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.548 [2024-04-26 16:10:55.186618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.548 [2024-04-26 16:10:55.186640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.548 [2024-04-26 16:10:55.186653] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.548 [2024-04-26 16:10:55.186663] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.548 [2024-04-26 16:10:55.186692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.548 qpair failed and we were unable to recover it. 
00:28:15.548 [2024-04-26 16:10:55.196468] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.548 [2024-04-26 16:10:55.196746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.548 [2024-04-26 16:10:55.196769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.548 [2024-04-26 16:10:55.196780] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.548 [2024-04-26 16:10:55.196789] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.548 [2024-04-26 16:10:55.196812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-04-26 16:10:55.206444] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.548 [2024-04-26 16:10:55.206597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.548 [2024-04-26 16:10:55.206620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.548 [2024-04-26 16:10:55.206631] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.548 [2024-04-26 16:10:55.206640] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.548 [2024-04-26 16:10:55.206666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-04-26 16:10:55.216566] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.548 [2024-04-26 16:10:55.216726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.548 [2024-04-26 16:10:55.216749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.548 [2024-04-26 16:10:55.216760] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.548 [2024-04-26 16:10:55.216769] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.548 [2024-04-26 16:10:55.216791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.549 qpair failed and we were unable to recover it. 
00:28:15.549 [2024-04-26 16:10:55.226541] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.549 [2024-04-26 16:10:55.226681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.549 [2024-04-26 16:10:55.226706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.549 [2024-04-26 16:10:55.226718] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.549 [2024-04-26 16:10:55.226727] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.549 [2024-04-26 16:10:55.226749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.549 qpair failed and we were unable to recover it. 00:28:15.808 [2024-04-26 16:10:55.236640] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.808 [2024-04-26 16:10:55.236822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.808 [2024-04-26 16:10:55.236846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.808 [2024-04-26 16:10:55.236859] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.808 [2024-04-26 16:10:55.236868] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.808 [2024-04-26 16:10:55.236891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.808 qpair failed and we were unable to recover it. 00:28:15.808 [2024-04-26 16:10:55.246563] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.808 [2024-04-26 16:10:55.246853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.808 [2024-04-26 16:10:55.246877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.808 [2024-04-26 16:10:55.246889] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.808 [2024-04-26 16:10:55.246898] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.808 [2024-04-26 16:10:55.246921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.808 qpair failed and we were unable to recover it. 
00:28:15.808 [2024-04-26 16:10:55.256568] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.808 [2024-04-26 16:10:55.256708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.808 [2024-04-26 16:10:55.256733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.808 [2024-04-26 16:10:55.256745] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.808 [2024-04-26 16:10:55.256754] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.808 [2024-04-26 16:10:55.256776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.808 qpair failed and we were unable to recover it. 00:28:15.808 [2024-04-26 16:10:55.266668] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.808 [2024-04-26 16:10:55.266807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.808 [2024-04-26 16:10:55.266829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.808 [2024-04-26 16:10:55.266841] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.808 [2024-04-26 16:10:55.266850] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.808 [2024-04-26 16:10:55.266872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.808 qpair failed and we were unable to recover it. 00:28:15.808 [2024-04-26 16:10:55.276609] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.808 [2024-04-26 16:10:55.276748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.808 [2024-04-26 16:10:55.276770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.808 [2024-04-26 16:10:55.276781] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.808 [2024-04-26 16:10:55.276790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.808 [2024-04-26 16:10:55.276812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.808 qpair failed and we were unable to recover it. 
00:28:15.808 [2024-04-26 16:10:55.286705] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.808 [2024-04-26 16:10:55.286847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.808 [2024-04-26 16:10:55.286869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.808 [2024-04-26 16:10:55.286880] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.808 [2024-04-26 16:10:55.286889] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.808 [2024-04-26 16:10:55.286911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.808 qpair failed and we were unable to recover it. 00:28:15.808 [2024-04-26 16:10:55.296726] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.808 [2024-04-26 16:10:55.296884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.808 [2024-04-26 16:10:55.296906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.808 [2024-04-26 16:10:55.296918] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.808 [2024-04-26 16:10:55.296927] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.808 [2024-04-26 16:10:55.296952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.808 qpair failed and we were unable to recover it. 00:28:15.808 [2024-04-26 16:10:55.306797] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.808 [2024-04-26 16:10:55.306941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.808 [2024-04-26 16:10:55.306963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.808 [2024-04-26 16:10:55.306974] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.808 [2024-04-26 16:10:55.306983] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.808 [2024-04-26 16:10:55.307006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.808 qpair failed and we were unable to recover it. 
00:28:15.808 [2024-04-26 16:10:55.316758] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.808 [2024-04-26 16:10:55.316905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.808 [2024-04-26 16:10:55.316928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.808 [2024-04-26 16:10:55.316939] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.808 [2024-04-26 16:10:55.316947] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.808 [2024-04-26 16:10:55.316969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.808 qpair failed and we were unable to recover it. 00:28:15.808 [2024-04-26 16:10:55.326818] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.808 [2024-04-26 16:10:55.326962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.808 [2024-04-26 16:10:55.326985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.808 [2024-04-26 16:10:55.326996] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.808 [2024-04-26 16:10:55.327005] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.808 [2024-04-26 16:10:55.327027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.808 qpair failed and we were unable to recover it. 00:28:15.808 [2024-04-26 16:10:55.336779] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.809 [2024-04-26 16:10:55.336915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.809 [2024-04-26 16:10:55.336938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.809 [2024-04-26 16:10:55.336949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.809 [2024-04-26 16:10:55.336957] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.809 [2024-04-26 16:10:55.336979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.809 qpair failed and we were unable to recover it. 
00:28:15.809 [2024-04-26 16:10:55.346904] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.809 [2024-04-26 16:10:55.347044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.809 [2024-04-26 16:10:55.347066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.809 [2024-04-26 16:10:55.347084] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.809 [2024-04-26 16:10:55.347093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.809 [2024-04-26 16:10:55.347115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.809 qpair failed and we were unable to recover it. 00:28:15.809 [2024-04-26 16:10:55.356981] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.809 [2024-04-26 16:10:55.357284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.809 [2024-04-26 16:10:55.357307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.809 [2024-04-26 16:10:55.357319] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.809 [2024-04-26 16:10:55.357328] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.809 [2024-04-26 16:10:55.357351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.809 qpair failed and we were unable to recover it. 00:28:15.809 [2024-04-26 16:10:55.366867] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.809 [2024-04-26 16:10:55.367010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.809 [2024-04-26 16:10:55.367034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.809 [2024-04-26 16:10:55.367045] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.809 [2024-04-26 16:10:55.367054] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.809 [2024-04-26 16:10:55.367089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.809 qpair failed and we were unable to recover it. 
00:28:15.809 [2024-04-26 16:10:55.376933] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.809 [2024-04-26 16:10:55.377068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.809 [2024-04-26 16:10:55.377098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.809 [2024-04-26 16:10:55.377109] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.809 [2024-04-26 16:10:55.377118] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.809 [2024-04-26 16:10:55.377140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.809 qpair failed and we were unable to recover it. 00:28:15.809 [2024-04-26 16:10:55.387026] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.809 [2024-04-26 16:10:55.387169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.809 [2024-04-26 16:10:55.387192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.809 [2024-04-26 16:10:55.387203] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.809 [2024-04-26 16:10:55.387217] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.809 [2024-04-26 16:10:55.387239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.809 qpair failed and we were unable to recover it. 00:28:15.809 [2024-04-26 16:10:55.396989] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.809 [2024-04-26 16:10:55.397148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.809 [2024-04-26 16:10:55.397170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.809 [2024-04-26 16:10:55.397182] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.809 [2024-04-26 16:10:55.397190] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.809 [2024-04-26 16:10:55.397212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.809 qpair failed and we were unable to recover it. 
00:28:15.809 [2024-04-26 16:10:55.407041] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.809 [2024-04-26 16:10:55.407194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.809 [2024-04-26 16:10:55.407216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.809 [2024-04-26 16:10:55.407229] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.809 [2024-04-26 16:10:55.407237] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.809 [2024-04-26 16:10:55.407260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.809 qpair failed and we were unable to recover it. 00:28:15.809 [2024-04-26 16:10:55.417049] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.809 [2024-04-26 16:10:55.417196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.809 [2024-04-26 16:10:55.417219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.809 [2024-04-26 16:10:55.417230] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.809 [2024-04-26 16:10:55.417238] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.809 [2024-04-26 16:10:55.417261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.809 qpair failed and we were unable to recover it. 00:28:15.809 [2024-04-26 16:10:55.427112] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.809 [2024-04-26 16:10:55.427264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.809 [2024-04-26 16:10:55.427286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.809 [2024-04-26 16:10:55.427298] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.809 [2024-04-26 16:10:55.427306] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:15.809 [2024-04-26 16:10:55.427328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.809 qpair failed and we were unable to recover it. 
00:28:15.809 [2024-04-26 16:10:55.437179] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.809 [2024-04-26 16:10:55.437485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.809 [2024-04-26 16:10:55.437522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.809 [2024-04-26 16:10:55.437541] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.809 [2024-04-26 16:10:55.437556] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:15.809 [2024-04-26 16:10:55.437590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:15.809 qpair failed and we were unable to recover it. 00:28:15.809 [2024-04-26 16:10:55.447130] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.809 [2024-04-26 16:10:55.447279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.809 [2024-04-26 16:10:55.447302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.809 [2024-04-26 16:10:55.447314] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.809 [2024-04-26 16:10:55.447324] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:15.809 [2024-04-26 16:10:55.447348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:15.809 qpair failed and we were unable to recover it. 00:28:15.809 [2024-04-26 16:10:55.457125] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.809 [2024-04-26 16:10:55.457267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.809 [2024-04-26 16:10:55.457289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.809 [2024-04-26 16:10:55.457301] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.810 [2024-04-26 16:10:55.457310] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:15.810 [2024-04-26 16:10:55.457332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:15.810 qpair failed and we were unable to recover it. 
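Starting with the block above, the failures report tqpair=0x614000010040 and qpair id 1 instead of the tqpair=0x614000002440 / qpair id 3 seen in every earlier block, so the unrecoverable-connect path is now being hit on a different TCP queue pair, presumably one belonging to a later connection attempt in the same test. If this output is saved to a file (build.log is again only an assumed name), the distinct handles and queue pair ids involved can be listed with:

  # List how many failures each TCP qpair handle and each qpair id accounts for.
  grep -o 'Failed to connect tqpair=0x[0-9a-f]*' build.log | sort | uniq -c
  grep -o 'on qpair id [0-9]*' build.log | sort | uniq -c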
00:28:15.810 [2024-04-26 16:10:55.467217] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.810 [2024-04-26 16:10:55.467352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.810 [2024-04-26 16:10:55.467377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.810 [2024-04-26 16:10:55.467389] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.810 [2024-04-26 16:10:55.467399] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:15.810 [2024-04-26 16:10:55.467421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:15.810 qpair failed and we were unable to recover it. 00:28:15.810 [2024-04-26 16:10:55.477335] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.810 [2024-04-26 16:10:55.477487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.810 [2024-04-26 16:10:55.477510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.810 [2024-04-26 16:10:55.477524] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.810 [2024-04-26 16:10:55.477533] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:15.810 [2024-04-26 16:10:55.477556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:15.810 qpair failed and we were unable to recover it. 00:28:15.810 [2024-04-26 16:10:55.487267] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:15.810 [2024-04-26 16:10:55.487407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:15.810 [2024-04-26 16:10:55.487430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:15.810 [2024-04-26 16:10:55.487441] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:15.810 [2024-04-26 16:10:55.487450] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:15.810 [2024-04-26 16:10:55.487472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:15.810 qpair failed and we were unable to recover it. 
00:28:16.069 [2024-04-26 16:10:55.497363] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.069 [2024-04-26 16:10:55.497502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.069 [2024-04-26 16:10:55.497525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.069 [2024-04-26 16:10:55.497536] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.069 [2024-04-26 16:10:55.497545] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.069 [2024-04-26 16:10:55.497567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.069 qpair failed and we were unable to recover it. 00:28:16.069 [2024-04-26 16:10:55.507427] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.069 [2024-04-26 16:10:55.507572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.069 [2024-04-26 16:10:55.507595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.069 [2024-04-26 16:10:55.507606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.070 [2024-04-26 16:10:55.507615] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.070 [2024-04-26 16:10:55.507637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.070 qpair failed and we were unable to recover it. 00:28:16.070 [2024-04-26 16:10:55.517313] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.070 [2024-04-26 16:10:55.517454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.070 [2024-04-26 16:10:55.517477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.070 [2024-04-26 16:10:55.517488] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.070 [2024-04-26 16:10:55.517497] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.070 [2024-04-26 16:10:55.517519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.070 qpair failed and we were unable to recover it. 
00:28:16.070 [2024-04-26 16:10:55.527382] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.070 [2024-04-26 16:10:55.527547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.070 [2024-04-26 16:10:55.527569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.070 [2024-04-26 16:10:55.527581] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.070 [2024-04-26 16:10:55.527590] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.070 [2024-04-26 16:10:55.527612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.070 qpair failed and we were unable to recover it. 00:28:16.070 [2024-04-26 16:10:55.537437] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.070 [2024-04-26 16:10:55.537583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.070 [2024-04-26 16:10:55.537605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.070 [2024-04-26 16:10:55.537617] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.070 [2024-04-26 16:10:55.537626] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.070 [2024-04-26 16:10:55.537647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.070 qpair failed and we were unable to recover it. 00:28:16.070 [2024-04-26 16:10:55.547468] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.070 [2024-04-26 16:10:55.547638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.070 [2024-04-26 16:10:55.547660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.070 [2024-04-26 16:10:55.547672] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.070 [2024-04-26 16:10:55.547682] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.070 [2024-04-26 16:10:55.547704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.070 qpair failed and we were unable to recover it. 
00:28:16.070 [2024-04-26 16:10:55.557569] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.070 [2024-04-26 16:10:55.557750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.070 [2024-04-26 16:10:55.557772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.070 [2024-04-26 16:10:55.557783] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.070 [2024-04-26 16:10:55.557792] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.070 [2024-04-26 16:10:55.557815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.070 qpair failed and we were unable to recover it. 00:28:16.070 [2024-04-26 16:10:55.567480] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.070 [2024-04-26 16:10:55.567617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.070 [2024-04-26 16:10:55.567642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.070 [2024-04-26 16:10:55.567654] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.070 [2024-04-26 16:10:55.567662] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.070 [2024-04-26 16:10:55.567685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.070 qpair failed and we were unable to recover it. 00:28:16.070 [2024-04-26 16:10:55.577592] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.070 [2024-04-26 16:10:55.577730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.070 [2024-04-26 16:10:55.577753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.070 [2024-04-26 16:10:55.577764] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.070 [2024-04-26 16:10:55.577772] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.070 [2024-04-26 16:10:55.577800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.070 qpair failed and we were unable to recover it. 
00:28:16.070 [2024-04-26 16:10:55.587578] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.070 [2024-04-26 16:10:55.587728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.070 [2024-04-26 16:10:55.587750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.070 [2024-04-26 16:10:55.587762] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.070 [2024-04-26 16:10:55.587770] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.070 [2024-04-26 16:10:55.587792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.070 qpair failed and we were unable to recover it. 00:28:16.070 [2024-04-26 16:10:55.597644] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.070 [2024-04-26 16:10:55.597807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.070 [2024-04-26 16:10:55.597830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.070 [2024-04-26 16:10:55.597842] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.070 [2024-04-26 16:10:55.597852] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.070 [2024-04-26 16:10:55.597874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.070 qpair failed and we were unable to recover it. 00:28:16.070 [2024-04-26 16:10:55.607679] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.070 [2024-04-26 16:10:55.607820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.070 [2024-04-26 16:10:55.607843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.070 [2024-04-26 16:10:55.607855] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.070 [2024-04-26 16:10:55.607864] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.070 [2024-04-26 16:10:55.607890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.070 qpair failed and we were unable to recover it. 
00:28:16.070 [2024-04-26 16:10:55.617606] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.070 [2024-04-26 16:10:55.617746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.070 [2024-04-26 16:10:55.617768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.070 [2024-04-26 16:10:55.617779] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.070 [2024-04-26 16:10:55.617788] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.070 [2024-04-26 16:10:55.617810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.070 qpair failed and we were unable to recover it. 00:28:16.070 [2024-04-26 16:10:55.627716] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.070 [2024-04-26 16:10:55.627856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.070 [2024-04-26 16:10:55.627878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.070 [2024-04-26 16:10:55.627890] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.070 [2024-04-26 16:10:55.627899] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.070 [2024-04-26 16:10:55.627926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.071 qpair failed and we were unable to recover it. 00:28:16.071 [2024-04-26 16:10:55.637894] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.071 [2024-04-26 16:10:55.638028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.071 [2024-04-26 16:10:55.638051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.071 [2024-04-26 16:10:55.638063] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.071 [2024-04-26 16:10:55.638079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.071 [2024-04-26 16:10:55.638102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.071 qpair failed and we were unable to recover it. 
00:28:16.071 [2024-04-26 16:10:55.647744] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.071 [2024-04-26 16:10:55.647881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.071 [2024-04-26 16:10:55.647904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.071 [2024-04-26 16:10:55.647916] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.071 [2024-04-26 16:10:55.647925] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.071 [2024-04-26 16:10:55.647947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.071 qpair failed and we were unable to recover it. 00:28:16.071 [2024-04-26 16:10:55.657757] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.071 [2024-04-26 16:10:55.657895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.071 [2024-04-26 16:10:55.657921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.071 [2024-04-26 16:10:55.657932] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.071 [2024-04-26 16:10:55.657941] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.071 [2024-04-26 16:10:55.657963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.071 qpair failed and we were unable to recover it. 00:28:16.071 [2024-04-26 16:10:55.667862] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.071 [2024-04-26 16:10:55.668001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.071 [2024-04-26 16:10:55.668023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.071 [2024-04-26 16:10:55.668034] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.071 [2024-04-26 16:10:55.668042] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.071 [2024-04-26 16:10:55.668065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.071 qpair failed and we were unable to recover it. 
00:28:16.071 [2024-04-26 16:10:55.677977] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.071 [2024-04-26 16:10:55.678127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.071 [2024-04-26 16:10:55.678150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.071 [2024-04-26 16:10:55.678162] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.071 [2024-04-26 16:10:55.678171] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.071 [2024-04-26 16:10:55.678194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.071 qpair failed and we were unable to recover it. 00:28:16.071 [2024-04-26 16:10:55.687853] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.071 [2024-04-26 16:10:55.688016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.071 [2024-04-26 16:10:55.688038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.071 [2024-04-26 16:10:55.688049] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.071 [2024-04-26 16:10:55.688058] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.071 [2024-04-26 16:10:55.688090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.071 qpair failed and we were unable to recover it. 00:28:16.071 [2024-04-26 16:10:55.697956] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.071 [2024-04-26 16:10:55.698126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.071 [2024-04-26 16:10:55.698151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.071 [2024-04-26 16:10:55.698164] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.071 [2024-04-26 16:10:55.698172] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.071 [2024-04-26 16:10:55.698198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.071 qpair failed and we were unable to recover it. 
00:28:16.071 [2024-04-26 16:10:55.708080] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.071 [2024-04-26 16:10:55.708221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.071 [2024-04-26 16:10:55.708243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.071 [2024-04-26 16:10:55.708255] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.071 [2024-04-26 16:10:55.708263] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.071 [2024-04-26 16:10:55.708286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.071 qpair failed and we were unable to recover it. 00:28:16.071 [2024-04-26 16:10:55.718034] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.071 [2024-04-26 16:10:55.718173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.071 [2024-04-26 16:10:55.718196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.071 [2024-04-26 16:10:55.718208] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.071 [2024-04-26 16:10:55.718217] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.071 [2024-04-26 16:10:55.718239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.071 qpair failed and we were unable to recover it. 00:28:16.071 [2024-04-26 16:10:55.727984] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.071 [2024-04-26 16:10:55.728128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.071 [2024-04-26 16:10:55.728150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.071 [2024-04-26 16:10:55.728162] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.071 [2024-04-26 16:10:55.728170] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.071 [2024-04-26 16:10:55.728192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.071 qpair failed and we were unable to recover it. 
00:28:16.071 [2024-04-26 16:10:55.738063] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.071 [2024-04-26 16:10:55.738210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.071 [2024-04-26 16:10:55.738232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.071 [2024-04-26 16:10:55.738243] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.071 [2024-04-26 16:10:55.738252] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.071 [2024-04-26 16:10:55.738274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.071 qpair failed and we were unable to recover it. 00:28:16.071 [2024-04-26 16:10:55.748170] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.071 [2024-04-26 16:10:55.748322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.071 [2024-04-26 16:10:55.748348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.071 [2024-04-26 16:10:55.748361] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.071 [2024-04-26 16:10:55.748369] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.071 [2024-04-26 16:10:55.748391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.071 qpair failed and we were unable to recover it. 00:28:16.331 [2024-04-26 16:10:55.758104] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.332 [2024-04-26 16:10:55.758241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.332 [2024-04-26 16:10:55.758264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.332 [2024-04-26 16:10:55.758275] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.332 [2024-04-26 16:10:55.758284] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.332 [2024-04-26 16:10:55.758306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.332 qpair failed and we were unable to recover it. 
00:28:16.332 [2024-04-26 16:10:55.768169] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.332 [2024-04-26 16:10:55.768303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.332 [2024-04-26 16:10:55.768326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.332 [2024-04-26 16:10:55.768337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.332 [2024-04-26 16:10:55.768352] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.332 [2024-04-26 16:10:55.768374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.332 qpair failed and we were unable to recover it. 00:28:16.332 [2024-04-26 16:10:55.778185] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.332 [2024-04-26 16:10:55.778322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.332 [2024-04-26 16:10:55.778344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.332 [2024-04-26 16:10:55.778355] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.332 [2024-04-26 16:10:55.778364] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.332 [2024-04-26 16:10:55.778386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.332 qpair failed and we were unable to recover it. 00:28:16.332 [2024-04-26 16:10:55.788351] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.332 [2024-04-26 16:10:55.788498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.332 [2024-04-26 16:10:55.788520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.332 [2024-04-26 16:10:55.788531] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.332 [2024-04-26 16:10:55.788543] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.332 [2024-04-26 16:10:55.788565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.332 qpair failed and we were unable to recover it. 
00:28:16.332 [2024-04-26 16:10:55.798265] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.332 [2024-04-26 16:10:55.798404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.332 [2024-04-26 16:10:55.798426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.332 [2024-04-26 16:10:55.798437] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.332 [2024-04-26 16:10:55.798446] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.332 [2024-04-26 16:10:55.798467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.332 qpair failed and we were unable to recover it. 00:28:16.332 [2024-04-26 16:10:55.808346] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.332 [2024-04-26 16:10:55.808481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.332 [2024-04-26 16:10:55.808503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.332 [2024-04-26 16:10:55.808514] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.332 [2024-04-26 16:10:55.808522] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.332 [2024-04-26 16:10:55.808545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.332 qpair failed and we were unable to recover it. 00:28:16.332 [2024-04-26 16:10:55.818305] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.332 [2024-04-26 16:10:55.818447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.332 [2024-04-26 16:10:55.818470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.332 [2024-04-26 16:10:55.818481] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.332 [2024-04-26 16:10:55.818490] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.332 [2024-04-26 16:10:55.818512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.332 qpair failed and we were unable to recover it. 
00:28:16.332 [2024-04-26 16:10:55.828350] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.332 [2024-04-26 16:10:55.828498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.332 [2024-04-26 16:10:55.828520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.332 [2024-04-26 16:10:55.828532] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.332 [2024-04-26 16:10:55.828540] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.332 [2024-04-26 16:10:55.828562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.332 qpair failed and we were unable to recover it. 00:28:16.332 [2024-04-26 16:10:55.838344] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.332 [2024-04-26 16:10:55.838631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.332 [2024-04-26 16:10:55.838654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.332 [2024-04-26 16:10:55.838665] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.332 [2024-04-26 16:10:55.838674] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.332 [2024-04-26 16:10:55.838696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.332 qpair failed and we were unable to recover it. 00:28:16.332 [2024-04-26 16:10:55.848392] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.332 [2024-04-26 16:10:55.848539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.332 [2024-04-26 16:10:55.848562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.332 [2024-04-26 16:10:55.848573] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.332 [2024-04-26 16:10:55.848582] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.332 [2024-04-26 16:10:55.848604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.332 qpair failed and we were unable to recover it. 
00:28:16.332 [2024-04-26 16:10:55.858469] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.332 [2024-04-26 16:10:55.858605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.332 [2024-04-26 16:10:55.858628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.332 [2024-04-26 16:10:55.858640] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.332 [2024-04-26 16:10:55.858649] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.332 [2024-04-26 16:10:55.858675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.332 qpair failed and we were unable to recover it. 00:28:16.332 [2024-04-26 16:10:55.868410] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.332 [2024-04-26 16:10:55.868593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.332 [2024-04-26 16:10:55.868615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.332 [2024-04-26 16:10:55.868627] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.332 [2024-04-26 16:10:55.868637] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.332 [2024-04-26 16:10:55.868659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.332 qpair failed and we were unable to recover it. 00:28:16.332 [2024-04-26 16:10:55.878441] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.332 [2024-04-26 16:10:55.878577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.333 [2024-04-26 16:10:55.878600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.333 [2024-04-26 16:10:55.878615] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.333 [2024-04-26 16:10:55.878624] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.333 [2024-04-26 16:10:55.878647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.333 qpair failed and we were unable to recover it. 
00:28:16.333 [2024-04-26 16:10:55.888450] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.333 [2024-04-26 16:10:55.888585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.333 [2024-04-26 16:10:55.888608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.333 [2024-04-26 16:10:55.888621] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.333 [2024-04-26 16:10:55.888631] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.333 [2024-04-26 16:10:55.888653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.333 qpair failed and we were unable to recover it. 00:28:16.333 [2024-04-26 16:10:55.898447] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.333 [2024-04-26 16:10:55.898584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.333 [2024-04-26 16:10:55.898606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.333 [2024-04-26 16:10:55.898617] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.333 [2024-04-26 16:10:55.898625] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.333 [2024-04-26 16:10:55.898647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.333 qpair failed and we were unable to recover it. 00:28:16.333 [2024-04-26 16:10:55.908692] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.333 [2024-04-26 16:10:55.908877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.333 [2024-04-26 16:10:55.908900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.333 [2024-04-26 16:10:55.908912] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.333 [2024-04-26 16:10:55.908921] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.333 [2024-04-26 16:10:55.908943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.333 qpair failed and we were unable to recover it. 
00:28:16.333 [2024-04-26 16:10:55.918681] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.333 [2024-04-26 16:10:55.918858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.333 [2024-04-26 16:10:55.918880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.333 [2024-04-26 16:10:55.918891] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.333 [2024-04-26 16:10:55.918900] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.333 [2024-04-26 16:10:55.918923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.333 qpair failed and we were unable to recover it. 00:28:16.333 [2024-04-26 16:10:55.928558] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.333 [2024-04-26 16:10:55.928699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.333 [2024-04-26 16:10:55.928722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.333 [2024-04-26 16:10:55.928733] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.333 [2024-04-26 16:10:55.928742] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.333 [2024-04-26 16:10:55.928764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.333 qpair failed and we were unable to recover it. 00:28:16.333 [2024-04-26 16:10:55.938596] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.333 [2024-04-26 16:10:55.938749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.333 [2024-04-26 16:10:55.938771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.333 [2024-04-26 16:10:55.938783] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.333 [2024-04-26 16:10:55.938791] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.333 [2024-04-26 16:10:55.938814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.333 qpair failed and we were unable to recover it. 
00:28:16.333 [2024-04-26 16:10:55.948730] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.333 [2024-04-26 16:10:55.948868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.333 [2024-04-26 16:10:55.948891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.333 [2024-04-26 16:10:55.948902] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.333 [2024-04-26 16:10:55.948911] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.333 [2024-04-26 16:10:55.948932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.333 qpair failed and we were unable to recover it. 00:28:16.333 [2024-04-26 16:10:55.958746] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.333 [2024-04-26 16:10:55.958881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.333 [2024-04-26 16:10:55.958903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.333 [2024-04-26 16:10:55.958915] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.333 [2024-04-26 16:10:55.958923] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.333 [2024-04-26 16:10:55.958946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.333 qpair failed and we were unable to recover it. 00:28:16.333 [2024-04-26 16:10:55.968705] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.333 [2024-04-26 16:10:55.968988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.333 [2024-04-26 16:10:55.969011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.333 [2024-04-26 16:10:55.969025] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.333 [2024-04-26 16:10:55.969035] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.333 [2024-04-26 16:10:55.969056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.333 qpair failed and we were unable to recover it. 
00:28:16.333 [2024-04-26 16:10:55.978757] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.333 [2024-04-26 16:10:55.978934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.333 [2024-04-26 16:10:55.978956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.333 [2024-04-26 16:10:55.978968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.333 [2024-04-26 16:10:55.978977] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.333 [2024-04-26 16:10:55.978999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.333 qpair failed and we were unable to recover it. 00:28:16.333 [2024-04-26 16:10:55.988775] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.334 [2024-04-26 16:10:55.988917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.334 [2024-04-26 16:10:55.988939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.334 [2024-04-26 16:10:55.988951] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.334 [2024-04-26 16:10:55.988959] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.334 [2024-04-26 16:10:55.988981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.334 qpair failed and we were unable to recover it. 00:28:16.334 [2024-04-26 16:10:55.998769] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.334 [2024-04-26 16:10:55.998948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.334 [2024-04-26 16:10:55.998971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.334 [2024-04-26 16:10:55.998982] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.334 [2024-04-26 16:10:55.998992] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.334 [2024-04-26 16:10:55.999014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.334 qpair failed and we were unable to recover it. 
00:28:16.334 [2024-04-26 16:10:56.008723] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.334 [2024-04-26 16:10:56.008863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.334 [2024-04-26 16:10:56.008886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.334 [2024-04-26 16:10:56.008898] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.334 [2024-04-26 16:10:56.008907] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.334 [2024-04-26 16:10:56.008928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.334 qpair failed and we were unable to recover it. 00:28:16.594 [2024-04-26 16:10:56.018826] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.594 [2024-04-26 16:10:56.019039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.594 [2024-04-26 16:10:56.019062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.594 [2024-04-26 16:10:56.019080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.594 [2024-04-26 16:10:56.019090] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.594 [2024-04-26 16:10:56.019112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.594 qpair failed and we were unable to recover it. 00:28:16.594 [2024-04-26 16:10:56.028952] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.594 [2024-04-26 16:10:56.029106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.594 [2024-04-26 16:10:56.029133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.594 [2024-04-26 16:10:56.029145] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.594 [2024-04-26 16:10:56.029153] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.594 [2024-04-26 16:10:56.029176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.594 qpair failed and we were unable to recover it. 
00:28:16.594 [2024-04-26 16:10:56.038935] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.594 [2024-04-26 16:10:56.039065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.594 [2024-04-26 16:10:56.039094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.594 [2024-04-26 16:10:56.039106] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.594 [2024-04-26 16:10:56.039114] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.594 [2024-04-26 16:10:56.039136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.594 qpair failed and we were unable to recover it. 00:28:16.594 [2024-04-26 16:10:56.048968] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.594 [2024-04-26 16:10:56.049131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.594 [2024-04-26 16:10:56.049154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.594 [2024-04-26 16:10:56.049165] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.594 [2024-04-26 16:10:56.049174] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.594 [2024-04-26 16:10:56.049196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.594 qpair failed and we were unable to recover it. 00:28:16.594 [2024-04-26 16:10:56.059012] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.594 [2024-04-26 16:10:56.059158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.594 [2024-04-26 16:10:56.059183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.594 [2024-04-26 16:10:56.059194] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.594 [2024-04-26 16:10:56.059203] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.594 [2024-04-26 16:10:56.059226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.594 qpair failed and we were unable to recover it. 
00:28:16.594 [2024-04-26 16:10:56.069062] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.594 [2024-04-26 16:10:56.069219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.594 [2024-04-26 16:10:56.069241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.594 [2024-04-26 16:10:56.069253] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.594 [2024-04-26 16:10:56.069261] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.594 [2024-04-26 16:10:56.069283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.594 qpair failed and we were unable to recover it. 00:28:16.594 [2024-04-26 16:10:56.079090] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.594 [2024-04-26 16:10:56.079236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.594 [2024-04-26 16:10:56.079258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.594 [2024-04-26 16:10:56.079269] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.594 [2024-04-26 16:10:56.079278] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.594 [2024-04-26 16:10:56.079300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.594 qpair failed and we were unable to recover it. 00:28:16.594 [2024-04-26 16:10:56.089059] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.594 [2024-04-26 16:10:56.089203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.594 [2024-04-26 16:10:56.089225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.594 [2024-04-26 16:10:56.089237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.594 [2024-04-26 16:10:56.089245] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.594 [2024-04-26 16:10:56.089272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.594 qpair failed and we were unable to recover it. 
00:28:16.594 [2024-04-26 16:10:56.099191] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.594 [2024-04-26 16:10:56.099358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.594 [2024-04-26 16:10:56.099380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.594 [2024-04-26 16:10:56.099391] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.594 [2024-04-26 16:10:56.099399] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.594 [2024-04-26 16:10:56.099424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.594 qpair failed and we were unable to recover it. 00:28:16.594 [2024-04-26 16:10:56.109135] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.594 [2024-04-26 16:10:56.109414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.594 [2024-04-26 16:10:56.109437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.594 [2024-04-26 16:10:56.109448] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.594 [2024-04-26 16:10:56.109457] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.594 [2024-04-26 16:10:56.109479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.594 qpair failed and we were unable to recover it. 00:28:16.594 [2024-04-26 16:10:56.119257] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.595 [2024-04-26 16:10:56.119406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.595 [2024-04-26 16:10:56.119429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.595 [2024-04-26 16:10:56.119441] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.595 [2024-04-26 16:10:56.119450] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.595 [2024-04-26 16:10:56.119472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.595 qpair failed and we were unable to recover it. 
00:28:16.595 [2024-04-26 16:10:56.129077] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.595 [2024-04-26 16:10:56.129218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.595 [2024-04-26 16:10:56.129239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.595 [2024-04-26 16:10:56.129251] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.595 [2024-04-26 16:10:56.129259] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.595 [2024-04-26 16:10:56.129281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.595 qpair failed and we were unable to recover it. 00:28:16.595 [2024-04-26 16:10:56.139287] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.595 [2024-04-26 16:10:56.139428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.595 [2024-04-26 16:10:56.139449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.595 [2024-04-26 16:10:56.139460] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.595 [2024-04-26 16:10:56.139469] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.595 [2024-04-26 16:10:56.139491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.595 qpair failed and we were unable to recover it. 00:28:16.595 [2024-04-26 16:10:56.149259] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.595 [2024-04-26 16:10:56.149399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.595 [2024-04-26 16:10:56.149424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.595 [2024-04-26 16:10:56.149435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.595 [2024-04-26 16:10:56.149444] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.595 [2024-04-26 16:10:56.149465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.595 qpair failed and we were unable to recover it. 
00:28:16.595 [2024-04-26 16:10:56.159378] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.595 [2024-04-26 16:10:56.159560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.595 [2024-04-26 16:10:56.159582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.595 [2024-04-26 16:10:56.159593] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.595 [2024-04-26 16:10:56.159602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.595 [2024-04-26 16:10:56.159624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.595 qpair failed and we were unable to recover it. 00:28:16.595 [2024-04-26 16:10:56.169279] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.595 [2024-04-26 16:10:56.169424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.595 [2024-04-26 16:10:56.169446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.595 [2024-04-26 16:10:56.169457] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.595 [2024-04-26 16:10:56.169466] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.595 [2024-04-26 16:10:56.169488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.595 qpair failed and we were unable to recover it. 00:28:16.595 [2024-04-26 16:10:56.179394] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.595 [2024-04-26 16:10:56.179573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.595 [2024-04-26 16:10:56.179595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.595 [2024-04-26 16:10:56.179607] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.595 [2024-04-26 16:10:56.179615] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.595 [2024-04-26 16:10:56.179637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.595 qpair failed and we were unable to recover it. 
00:28:16.595 [2024-04-26 16:10:56.189376] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.595 [2024-04-26 16:10:56.189529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.595 [2024-04-26 16:10:56.189550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.595 [2024-04-26 16:10:56.189562] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.595 [2024-04-26 16:10:56.189574] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.595 [2024-04-26 16:10:56.189595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.595 qpair failed and we were unable to recover it. 00:28:16.595 [2024-04-26 16:10:56.199361] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.595 [2024-04-26 16:10:56.199545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.595 [2024-04-26 16:10:56.199567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.595 [2024-04-26 16:10:56.199579] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.595 [2024-04-26 16:10:56.199588] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.595 [2024-04-26 16:10:56.199610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.595 qpair failed and we were unable to recover it. 00:28:16.595 [2024-04-26 16:10:56.209379] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.595 [2024-04-26 16:10:56.209534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.595 [2024-04-26 16:10:56.209556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.595 [2024-04-26 16:10:56.209567] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.595 [2024-04-26 16:10:56.209576] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.595 [2024-04-26 16:10:56.209598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.595 qpair failed and we were unable to recover it. 
00:28:16.595 [2024-04-26 16:10:56.219490] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.595 [2024-04-26 16:10:56.219629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.595 [2024-04-26 16:10:56.219651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.595 [2024-04-26 16:10:56.219662] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.595 [2024-04-26 16:10:56.219670] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.595 [2024-04-26 16:10:56.219692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.595 qpair failed and we were unable to recover it. 00:28:16.595 [2024-04-26 16:10:56.229483] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.595 [2024-04-26 16:10:56.229667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.595 [2024-04-26 16:10:56.229690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.595 [2024-04-26 16:10:56.229702] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.595 [2024-04-26 16:10:56.229712] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.595 [2024-04-26 16:10:56.229734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.595 qpair failed and we were unable to recover it. 00:28:16.595 [2024-04-26 16:10:56.239469] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.595 [2024-04-26 16:10:56.239616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.595 [2024-04-26 16:10:56.239638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.595 [2024-04-26 16:10:56.239649] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.595 [2024-04-26 16:10:56.239658] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.595 [2024-04-26 16:10:56.239680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.595 qpair failed and we were unable to recover it. 
00:28:16.595 [2024-04-26 16:10:56.249454] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.595 [2024-04-26 16:10:56.249594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.595 [2024-04-26 16:10:56.249617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.595 [2024-04-26 16:10:56.249629] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.595 [2024-04-26 16:10:56.249639] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.595 [2024-04-26 16:10:56.249662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.595 qpair failed and we were unable to recover it. 00:28:16.595 [2024-04-26 16:10:56.259596] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.595 [2024-04-26 16:10:56.259742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.595 [2024-04-26 16:10:56.259764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.595 [2024-04-26 16:10:56.259775] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.595 [2024-04-26 16:10:56.259784] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.595 [2024-04-26 16:10:56.259806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.595 qpair failed and we were unable to recover it. 00:28:16.595 [2024-04-26 16:10:56.269751] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.595 [2024-04-26 16:10:56.269891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.595 [2024-04-26 16:10:56.269913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.595 [2024-04-26 16:10:56.269924] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.595 [2024-04-26 16:10:56.269932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.595 [2024-04-26 16:10:56.269954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.595 qpair failed and we were unable to recover it. 
00:28:16.855 [2024-04-26 16:10:56.279681] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.855 [2024-04-26 16:10:56.279820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.855 [2024-04-26 16:10:56.279842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.855 [2024-04-26 16:10:56.279857] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.855 [2024-04-26 16:10:56.279866] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.855 [2024-04-26 16:10:56.279909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.855 qpair failed and we were unable to recover it. 00:28:16.855 [2024-04-26 16:10:56.289625] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.855 [2024-04-26 16:10:56.289806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.856 [2024-04-26 16:10:56.289828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.856 [2024-04-26 16:10:56.289840] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.856 [2024-04-26 16:10:56.289849] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.856 [2024-04-26 16:10:56.289871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.856 qpair failed and we were unable to recover it. 00:28:16.856 [2024-04-26 16:10:56.299775] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.856 [2024-04-26 16:10:56.299915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.856 [2024-04-26 16:10:56.299938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.856 [2024-04-26 16:10:56.299949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.856 [2024-04-26 16:10:56.299958] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.856 [2024-04-26 16:10:56.299981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.856 qpair failed and we were unable to recover it. 
00:28:16.856 [2024-04-26 16:10:56.309812] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.856 [2024-04-26 16:10:56.309980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.856 [2024-04-26 16:10:56.310001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.856 [2024-04-26 16:10:56.310013] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.856 [2024-04-26 16:10:56.310021] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.856 [2024-04-26 16:10:56.310044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.856 qpair failed and we were unable to recover it. 00:28:16.856 [2024-04-26 16:10:56.319760] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.856 [2024-04-26 16:10:56.320045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.856 [2024-04-26 16:10:56.320068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.856 [2024-04-26 16:10:56.320086] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.856 [2024-04-26 16:10:56.320095] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.856 [2024-04-26 16:10:56.320122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.856 qpair failed and we were unable to recover it. 00:28:16.856 [2024-04-26 16:10:56.329726] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.856 [2024-04-26 16:10:56.329867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.856 [2024-04-26 16:10:56.329888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.856 [2024-04-26 16:10:56.329900] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.856 [2024-04-26 16:10:56.329908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.856 [2024-04-26 16:10:56.329930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.856 qpair failed and we were unable to recover it. 
00:28:16.856 [2024-04-26 16:10:56.339882] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.856 [2024-04-26 16:10:56.340024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.856 [2024-04-26 16:10:56.340046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.856 [2024-04-26 16:10:56.340056] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.856 [2024-04-26 16:10:56.340065] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.856 [2024-04-26 16:10:56.340094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.856 qpair failed and we were unable to recover it. 00:28:16.856 [2024-04-26 16:10:56.349828] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.856 [2024-04-26 16:10:56.349967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.856 [2024-04-26 16:10:56.349988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.856 [2024-04-26 16:10:56.350000] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.856 [2024-04-26 16:10:56.350008] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.856 [2024-04-26 16:10:56.350030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.856 qpair failed and we were unable to recover it. 00:28:16.856 [2024-04-26 16:10:56.359846] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.856 [2024-04-26 16:10:56.359983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.856 [2024-04-26 16:10:56.360004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.856 [2024-04-26 16:10:56.360015] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.856 [2024-04-26 16:10:56.360024] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.856 [2024-04-26 16:10:56.360046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.856 qpair failed and we were unable to recover it. 
00:28:16.856 [2024-04-26 16:10:56.370001] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.856 [2024-04-26 16:10:56.370155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.856 [2024-04-26 16:10:56.370178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.856 [2024-04-26 16:10:56.370194] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.856 [2024-04-26 16:10:56.370203] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.856 [2024-04-26 16:10:56.370225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.856 qpair failed and we were unable to recover it. 00:28:16.856 [2024-04-26 16:10:56.379929] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.856 [2024-04-26 16:10:56.380105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.856 [2024-04-26 16:10:56.380127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.856 [2024-04-26 16:10:56.380139] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.856 [2024-04-26 16:10:56.380148] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.856 [2024-04-26 16:10:56.380170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.856 qpair failed and we were unable to recover it. 00:28:16.856 [2024-04-26 16:10:56.390000] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.857 [2024-04-26 16:10:56.390151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.857 [2024-04-26 16:10:56.390173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.857 [2024-04-26 16:10:56.390184] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.857 [2024-04-26 16:10:56.390193] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.857 [2024-04-26 16:10:56.390215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.857 qpair failed and we were unable to recover it. 
00:28:16.857 [2024-04-26 16:10:56.399986] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.857 [2024-04-26 16:10:56.400134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.857 [2024-04-26 16:10:56.400156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.857 [2024-04-26 16:10:56.400168] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.857 [2024-04-26 16:10:56.400176] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.857 [2024-04-26 16:10:56.400198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.857 qpair failed and we were unable to recover it. 00:28:16.857 [2024-04-26 16:10:56.409968] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.857 [2024-04-26 16:10:56.410246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.857 [2024-04-26 16:10:56.410269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.857 [2024-04-26 16:10:56.410280] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.857 [2024-04-26 16:10:56.410289] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.857 [2024-04-26 16:10:56.410311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.857 qpair failed and we were unable to recover it. 00:28:16.857 [2024-04-26 16:10:56.420075] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.857 [2024-04-26 16:10:56.420248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.857 [2024-04-26 16:10:56.420270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.857 [2024-04-26 16:10:56.420281] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.857 [2024-04-26 16:10:56.420290] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.857 [2024-04-26 16:10:56.420313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.857 qpair failed and we were unable to recover it. 
00:28:16.857 [2024-04-26 16:10:56.430142] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.857 [2024-04-26 16:10:56.430293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.857 [2024-04-26 16:10:56.430314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.857 [2024-04-26 16:10:56.430325] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.857 [2024-04-26 16:10:56.430334] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.857 [2024-04-26 16:10:56.430356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.857 qpair failed and we were unable to recover it. 00:28:16.857 [2024-04-26 16:10:56.440150] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.857 [2024-04-26 16:10:56.440326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.857 [2024-04-26 16:10:56.440348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.857 [2024-04-26 16:10:56.440359] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.857 [2024-04-26 16:10:56.440369] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.857 [2024-04-26 16:10:56.440391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.857 qpair failed and we were unable to recover it. 00:28:16.857 [2024-04-26 16:10:56.450078] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.857 [2024-04-26 16:10:56.450217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.857 [2024-04-26 16:10:56.450239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.857 [2024-04-26 16:10:56.450250] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.857 [2024-04-26 16:10:56.450259] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.857 [2024-04-26 16:10:56.450280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.857 qpair failed and we were unable to recover it. 
00:28:16.857 [2024-04-26 16:10:56.460221] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.857 [2024-04-26 16:10:56.460367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.857 [2024-04-26 16:10:56.460395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.857 [2024-04-26 16:10:56.460407] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.857 [2024-04-26 16:10:56.460415] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.857 [2024-04-26 16:10:56.460437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.857 qpair failed and we were unable to recover it. 00:28:16.857 [2024-04-26 16:10:56.470217] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.857 [2024-04-26 16:10:56.470375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.857 [2024-04-26 16:10:56.470398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.857 [2024-04-26 16:10:56.470409] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.858 [2024-04-26 16:10:56.470417] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.858 [2024-04-26 16:10:56.470439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.858 qpair failed and we were unable to recover it. 00:28:16.858 [2024-04-26 16:10:56.480316] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.858 [2024-04-26 16:10:56.480493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.858 [2024-04-26 16:10:56.480516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.858 [2024-04-26 16:10:56.480528] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.858 [2024-04-26 16:10:56.480538] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.858 [2024-04-26 16:10:56.480560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.858 qpair failed and we were unable to recover it. 
00:28:16.858 [2024-04-26 16:10:56.490222] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.858 [2024-04-26 16:10:56.490509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.858 [2024-04-26 16:10:56.490532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.858 [2024-04-26 16:10:56.490543] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.858 [2024-04-26 16:10:56.490552] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.858 [2024-04-26 16:10:56.490574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.858 qpair failed and we were unable to recover it. 00:28:16.858 [2024-04-26 16:10:56.500267] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.858 [2024-04-26 16:10:56.500429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.858 [2024-04-26 16:10:56.500452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.858 [2024-04-26 16:10:56.500463] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.858 [2024-04-26 16:10:56.500473] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.858 [2024-04-26 16:10:56.500497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.858 qpair failed and we were unable to recover it. 00:28:16.858 [2024-04-26 16:10:56.510347] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.858 [2024-04-26 16:10:56.510497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.858 [2024-04-26 16:10:56.510519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.858 [2024-04-26 16:10:56.510530] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.858 [2024-04-26 16:10:56.510538] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.858 [2024-04-26 16:10:56.510560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.858 qpair failed and we were unable to recover it. 
00:28:16.858 [2024-04-26 16:10:56.520460] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.858 [2024-04-26 16:10:56.520597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.858 [2024-04-26 16:10:56.520619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.858 [2024-04-26 16:10:56.520630] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.858 [2024-04-26 16:10:56.520638] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.858 [2024-04-26 16:10:56.520661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.858 qpair failed and we were unable to recover it. 00:28:16.858 [2024-04-26 16:10:56.530361] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.858 [2024-04-26 16:10:56.530505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.858 [2024-04-26 16:10:56.530527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.858 [2024-04-26 16:10:56.530539] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.858 [2024-04-26 16:10:56.530548] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:16.858 [2024-04-26 16:10:56.530569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.858 qpair failed and we were unable to recover it. 00:28:17.118 [2024-04-26 16:10:56.540464] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.118 [2024-04-26 16:10:56.540604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.118 [2024-04-26 16:10:56.540626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.118 [2024-04-26 16:10:56.540643] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.118 [2024-04-26 16:10:56.540652] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.118 [2024-04-26 16:10:56.540674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.118 qpair failed and we were unable to recover it. 
00:28:17.118 [2024-04-26 16:10:56.550422] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.118 [2024-04-26 16:10:56.550705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.118 [2024-04-26 16:10:56.550731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.118 [2024-04-26 16:10:56.550742] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.118 [2024-04-26 16:10:56.550752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.118 [2024-04-26 16:10:56.550777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.118 qpair failed and we were unable to recover it. 00:28:17.118 [2024-04-26 16:10:56.560496] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.118 [2024-04-26 16:10:56.560655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.118 [2024-04-26 16:10:56.560677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.118 [2024-04-26 16:10:56.560688] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.118 [2024-04-26 16:10:56.560696] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.118 [2024-04-26 16:10:56.560718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.118 qpair failed and we were unable to recover it. 00:28:17.118 [2024-04-26 16:10:56.570407] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.118 [2024-04-26 16:10:56.570548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.118 [2024-04-26 16:10:56.570570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.118 [2024-04-26 16:10:56.570581] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.118 [2024-04-26 16:10:56.570590] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.118 [2024-04-26 16:10:56.570613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.118 qpair failed and we were unable to recover it. 
[The same seven-message CONNECT-failure sequence (Unknown controller ID 0x1; Connect command failed, rc -5; sct 1, sc 130; Failed to poll NVMe-oF Fabric CONNECT command; Failed to connect tqpair=0x614000010040; CQ transport error -6 (No such device or address) on qpair id 1; "qpair failed and we were unable to recover it.") repeats for 66 further I/O qpair connection attempts at roughly 10 ms intervals, from 16:10:56.580 through 16:10:57.232.]
00:28:17.641 [2024-04-26 16:10:57.242461] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.641 [2024-04-26 16:10:57.242601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.641 [2024-04-26 16:10:57.242623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.641 [2024-04-26 16:10:57.242635] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.641 [2024-04-26 16:10:57.242643] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.641 [2024-04-26 16:10:57.242670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.641 qpair failed and we were unable to recover it. 00:28:17.641 [2024-04-26 16:10:57.252506] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.641 [2024-04-26 16:10:57.252641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.641 [2024-04-26 16:10:57.252663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.641 [2024-04-26 16:10:57.252675] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.641 [2024-04-26 16:10:57.252684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.641 [2024-04-26 16:10:57.252706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.641 qpair failed and we were unable to recover it. 00:28:17.641 [2024-04-26 16:10:57.262508] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.641 [2024-04-26 16:10:57.262653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.641 [2024-04-26 16:10:57.262679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.641 [2024-04-26 16:10:57.262690] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.641 [2024-04-26 16:10:57.262699] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.641 [2024-04-26 16:10:57.262721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.641 qpair failed and we were unable to recover it. 
00:28:17.641 [2024-04-26 16:10:57.272479] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.641 [2024-04-26 16:10:57.272633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.641 [2024-04-26 16:10:57.272655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.641 [2024-04-26 16:10:57.272666] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.641 [2024-04-26 16:10:57.272675] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.641 [2024-04-26 16:10:57.272697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.641 qpair failed and we were unable to recover it. 00:28:17.641 [2024-04-26 16:10:57.282615] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.641 [2024-04-26 16:10:57.282759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.641 [2024-04-26 16:10:57.282781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.641 [2024-04-26 16:10:57.282792] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.641 [2024-04-26 16:10:57.282800] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.641 [2024-04-26 16:10:57.282822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.641 qpair failed and we were unable to recover it. 00:28:17.641 [2024-04-26 16:10:57.292621] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.641 [2024-04-26 16:10:57.292760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.641 [2024-04-26 16:10:57.292781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.641 [2024-04-26 16:10:57.292793] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.641 [2024-04-26 16:10:57.292801] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.641 [2024-04-26 16:10:57.292824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.641 qpair failed and we were unable to recover it. 
00:28:17.641 [2024-04-26 16:10:57.302623] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.641 [2024-04-26 16:10:57.302791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.641 [2024-04-26 16:10:57.302813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.641 [2024-04-26 16:10:57.302824] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.641 [2024-04-26 16:10:57.302833] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.641 [2024-04-26 16:10:57.302859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.641 qpair failed and we were unable to recover it. 00:28:17.641 [2024-04-26 16:10:57.312648] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.641 [2024-04-26 16:10:57.312787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.641 [2024-04-26 16:10:57.312814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.641 [2024-04-26 16:10:57.312826] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.641 [2024-04-26 16:10:57.312835] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.641 [2024-04-26 16:10:57.312857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.641 qpair failed and we were unable to recover it. 00:28:17.901 [2024-04-26 16:10:57.322694] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.901 [2024-04-26 16:10:57.322845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.901 [2024-04-26 16:10:57.322867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.901 [2024-04-26 16:10:57.322878] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.901 [2024-04-26 16:10:57.322886] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.901 [2024-04-26 16:10:57.322909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.901 qpair failed and we were unable to recover it. 
00:28:17.901 [2024-04-26 16:10:57.332700] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.901 [2024-04-26 16:10:57.332835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.901 [2024-04-26 16:10:57.332856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.901 [2024-04-26 16:10:57.332867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.901 [2024-04-26 16:10:57.332876] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.901 [2024-04-26 16:10:57.332898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.901 qpair failed and we were unable to recover it. 00:28:17.901 [2024-04-26 16:10:57.342745] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.901 [2024-04-26 16:10:57.342880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.901 [2024-04-26 16:10:57.342902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.901 [2024-04-26 16:10:57.342914] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.901 [2024-04-26 16:10:57.342923] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.901 [2024-04-26 16:10:57.342945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.901 qpair failed and we were unable to recover it. 00:28:17.901 [2024-04-26 16:10:57.352794] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.901 [2024-04-26 16:10:57.352932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.901 [2024-04-26 16:10:57.352957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.901 [2024-04-26 16:10:57.352969] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.901 [2024-04-26 16:10:57.352978] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.901 [2024-04-26 16:10:57.353000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.901 qpair failed and we were unable to recover it. 
00:28:17.901 [2024-04-26 16:10:57.362848] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.901 [2024-04-26 16:10:57.362987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.901 [2024-04-26 16:10:57.363009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.901 [2024-04-26 16:10:57.363020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.901 [2024-04-26 16:10:57.363029] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.901 [2024-04-26 16:10:57.363051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.901 qpair failed and we were unable to recover it. 00:28:17.901 [2024-04-26 16:10:57.372836] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.901 [2024-04-26 16:10:57.372986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.901 [2024-04-26 16:10:57.373008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.901 [2024-04-26 16:10:57.373019] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.901 [2024-04-26 16:10:57.373028] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.901 [2024-04-26 16:10:57.373051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.901 qpair failed and we were unable to recover it. 00:28:17.901 [2024-04-26 16:10:57.382896] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.901 [2024-04-26 16:10:57.383218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.901 [2024-04-26 16:10:57.383241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.901 [2024-04-26 16:10:57.383252] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.901 [2024-04-26 16:10:57.383262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.901 [2024-04-26 16:10:57.383285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.901 qpair failed and we were unable to recover it. 
00:28:17.901 [2024-04-26 16:10:57.392900] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.902 [2024-04-26 16:10:57.393092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.902 [2024-04-26 16:10:57.393114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.902 [2024-04-26 16:10:57.393125] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.902 [2024-04-26 16:10:57.393134] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.902 [2024-04-26 16:10:57.393160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.902 qpair failed and we were unable to recover it. 00:28:17.902 [2024-04-26 16:10:57.403095] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.902 [2024-04-26 16:10:57.403232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.902 [2024-04-26 16:10:57.403254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.902 [2024-04-26 16:10:57.403265] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.902 [2024-04-26 16:10:57.403273] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.902 [2024-04-26 16:10:57.403295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.902 qpair failed and we were unable to recover it. 00:28:17.902 [2024-04-26 16:10:57.412963] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.902 [2024-04-26 16:10:57.413102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.902 [2024-04-26 16:10:57.413124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.902 [2024-04-26 16:10:57.413136] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.902 [2024-04-26 16:10:57.413145] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.902 [2024-04-26 16:10:57.413167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.902 qpair failed and we were unable to recover it. 
00:28:17.902 [2024-04-26 16:10:57.422995] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.902 [2024-04-26 16:10:57.423141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.902 [2024-04-26 16:10:57.423163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.902 [2024-04-26 16:10:57.423174] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.902 [2024-04-26 16:10:57.423183] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.902 [2024-04-26 16:10:57.423204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.902 qpair failed and we were unable to recover it. 00:28:17.902 [2024-04-26 16:10:57.432987] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.902 [2024-04-26 16:10:57.433137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.902 [2024-04-26 16:10:57.433159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.902 [2024-04-26 16:10:57.433170] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.902 [2024-04-26 16:10:57.433179] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.902 [2024-04-26 16:10:57.433201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.902 qpair failed and we were unable to recover it. 00:28:17.902 [2024-04-26 16:10:57.442975] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.902 [2024-04-26 16:10:57.443119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.902 [2024-04-26 16:10:57.443143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.902 [2024-04-26 16:10:57.443155] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.902 [2024-04-26 16:10:57.443163] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.902 [2024-04-26 16:10:57.443185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.902 qpair failed and we were unable to recover it. 
00:28:17.902 [2024-04-26 16:10:57.453118] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.902 [2024-04-26 16:10:57.453260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.902 [2024-04-26 16:10:57.453283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.902 [2024-04-26 16:10:57.453294] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.902 [2024-04-26 16:10:57.453303] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.902 [2024-04-26 16:10:57.453325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.902 qpair failed and we were unable to recover it. 00:28:17.902 [2024-04-26 16:10:57.463102] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.902 [2024-04-26 16:10:57.463240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.902 [2024-04-26 16:10:57.463262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.902 [2024-04-26 16:10:57.463273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.902 [2024-04-26 16:10:57.463282] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.902 [2024-04-26 16:10:57.463303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.902 qpair failed and we were unable to recover it. 00:28:17.902 [2024-04-26 16:10:57.473106] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.902 [2024-04-26 16:10:57.473262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.902 [2024-04-26 16:10:57.473284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.902 [2024-04-26 16:10:57.473295] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.902 [2024-04-26 16:10:57.473303] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.902 [2024-04-26 16:10:57.473330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.902 qpair failed and we were unable to recover it. 
00:28:17.902 [2024-04-26 16:10:57.483126] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.902 [2024-04-26 16:10:57.483417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.902 [2024-04-26 16:10:57.483439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.902 [2024-04-26 16:10:57.483450] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.902 [2024-04-26 16:10:57.483463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.902 [2024-04-26 16:10:57.483484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.902 qpair failed and we were unable to recover it. 00:28:17.902 [2024-04-26 16:10:57.493231] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.902 [2024-04-26 16:10:57.493370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.902 [2024-04-26 16:10:57.493396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.902 [2024-04-26 16:10:57.493409] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.902 [2024-04-26 16:10:57.493418] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.902 [2024-04-26 16:10:57.493440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.902 qpair failed and we were unable to recover it. 00:28:17.902 [2024-04-26 16:10:57.503203] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.902 [2024-04-26 16:10:57.503342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.902 [2024-04-26 16:10:57.503364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.902 [2024-04-26 16:10:57.503375] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.902 [2024-04-26 16:10:57.503384] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.902 [2024-04-26 16:10:57.503405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.902 qpair failed and we were unable to recover it. 
00:28:17.902 [2024-04-26 16:10:57.513221] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.903 [2024-04-26 16:10:57.513395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.903 [2024-04-26 16:10:57.513417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.903 [2024-04-26 16:10:57.513429] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.903 [2024-04-26 16:10:57.513438] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.903 [2024-04-26 16:10:57.513459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.903 qpair failed and we were unable to recover it. 00:28:17.903 [2024-04-26 16:10:57.523305] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.903 [2024-04-26 16:10:57.523452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.903 [2024-04-26 16:10:57.523473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.903 [2024-04-26 16:10:57.523484] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.903 [2024-04-26 16:10:57.523493] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.903 [2024-04-26 16:10:57.523515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.903 qpair failed and we were unable to recover it. 00:28:17.903 [2024-04-26 16:10:57.533267] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.903 [2024-04-26 16:10:57.533410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.903 [2024-04-26 16:10:57.533432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.903 [2024-04-26 16:10:57.533443] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.903 [2024-04-26 16:10:57.533451] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.903 [2024-04-26 16:10:57.533473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.903 qpair failed and we were unable to recover it. 
00:28:17.903 [2024-04-26 16:10:57.543277] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.903 [2024-04-26 16:10:57.543413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.903 [2024-04-26 16:10:57.543435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.903 [2024-04-26 16:10:57.543446] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.903 [2024-04-26 16:10:57.543455] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.903 [2024-04-26 16:10:57.543477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.903 qpair failed and we were unable to recover it. 00:28:17.903 [2024-04-26 16:10:57.553328] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.903 [2024-04-26 16:10:57.553464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.903 [2024-04-26 16:10:57.553485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.903 [2024-04-26 16:10:57.553496] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.903 [2024-04-26 16:10:57.553504] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.903 [2024-04-26 16:10:57.553527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.903 qpair failed and we were unable to recover it. 00:28:17.903 [2024-04-26 16:10:57.563369] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.903 [2024-04-26 16:10:57.563507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.903 [2024-04-26 16:10:57.563529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.903 [2024-04-26 16:10:57.563540] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.903 [2024-04-26 16:10:57.563548] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.903 [2024-04-26 16:10:57.563576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.903 qpair failed and we were unable to recover it. 
00:28:17.903 [2024-04-26 16:10:57.573558] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.903 [2024-04-26 16:10:57.573707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.903 [2024-04-26 16:10:57.573729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.903 [2024-04-26 16:10:57.573744] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.903 [2024-04-26 16:10:57.573752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:17.903 [2024-04-26 16:10:57.573774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.903 qpair failed and we were unable to recover it. 00:28:18.162 [2024-04-26 16:10:57.583599] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.162 [2024-04-26 16:10:57.583737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.162 [2024-04-26 16:10:57.583759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.162 [2024-04-26 16:10:57.583770] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.162 [2024-04-26 16:10:57.583779] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.162 [2024-04-26 16:10:57.583801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.162 qpair failed and we were unable to recover it. 00:28:18.162 [2024-04-26 16:10:57.593420] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.162 [2024-04-26 16:10:57.593577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.162 [2024-04-26 16:10:57.593598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.162 [2024-04-26 16:10:57.593610] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.162 [2024-04-26 16:10:57.593618] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.162 [2024-04-26 16:10:57.593641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.162 qpair failed and we were unable to recover it. 
00:28:18.162 [2024-04-26 16:10:57.603572] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.162 [2024-04-26 16:10:57.603719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.162 [2024-04-26 16:10:57.603741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.162 [2024-04-26 16:10:57.603752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.162 [2024-04-26 16:10:57.603761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.162 [2024-04-26 16:10:57.603782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.162 qpair failed and we were unable to recover it. 00:28:18.162 [2024-04-26 16:10:57.613569] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.162 [2024-04-26 16:10:57.613712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.162 [2024-04-26 16:10:57.613733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.162 [2024-04-26 16:10:57.613744] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.162 [2024-04-26 16:10:57.613753] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.162 [2024-04-26 16:10:57.613775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.162 qpair failed and we were unable to recover it. 00:28:18.162 [2024-04-26 16:10:57.623595] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.162 [2024-04-26 16:10:57.623729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.162 [2024-04-26 16:10:57.623750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.162 [2024-04-26 16:10:57.623761] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.162 [2024-04-26 16:10:57.623770] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.162 [2024-04-26 16:10:57.623792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.162 qpair failed and we were unable to recover it. 
00:28:18.162 [2024-04-26 16:10:57.633642] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.162 [2024-04-26 16:10:57.633790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.162 [2024-04-26 16:10:57.633811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.162 [2024-04-26 16:10:57.633823] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.162 [2024-04-26 16:10:57.633831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.162 [2024-04-26 16:10:57.633853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.162 qpair failed and we were unable to recover it. 00:28:18.162 [2024-04-26 16:10:57.643601] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.162 [2024-04-26 16:10:57.643741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.162 [2024-04-26 16:10:57.643763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.162 [2024-04-26 16:10:57.643773] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.162 [2024-04-26 16:10:57.643782] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.162 [2024-04-26 16:10:57.643803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.162 qpair failed and we were unable to recover it. 00:28:18.162 [2024-04-26 16:10:57.653680] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.162 [2024-04-26 16:10:57.653818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.162 [2024-04-26 16:10:57.653840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.162 [2024-04-26 16:10:57.653852] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.162 [2024-04-26 16:10:57.653861] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.162 [2024-04-26 16:10:57.653882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.162 qpair failed and we were unable to recover it. 
00:28:18.162 [2024-04-26 16:10:57.663690] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.162 [2024-04-26 16:10:57.663854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.162 [2024-04-26 16:10:57.663878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.162 [2024-04-26 16:10:57.663893] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.162 [2024-04-26 16:10:57.663903] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.162 [2024-04-26 16:10:57.663925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.162 qpair failed and we were unable to recover it. 00:28:18.162 [2024-04-26 16:10:57.673708] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.162 [2024-04-26 16:10:57.673879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.162 [2024-04-26 16:10:57.673901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.162 [2024-04-26 16:10:57.673912] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.162 [2024-04-26 16:10:57.673921] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.162 [2024-04-26 16:10:57.673944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.162 qpair failed and we were unable to recover it. 00:28:18.162 [2024-04-26 16:10:57.683815] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.162 [2024-04-26 16:10:57.683951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.162 [2024-04-26 16:10:57.683973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.162 [2024-04-26 16:10:57.683984] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.162 [2024-04-26 16:10:57.683993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.162 [2024-04-26 16:10:57.684015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.162 qpair failed and we were unable to recover it. 
00:28:18.162 [2024-04-26 16:10:57.693742] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.163 [2024-04-26 16:10:57.693895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.163 [2024-04-26 16:10:57.693917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.163 [2024-04-26 16:10:57.693928] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.163 [2024-04-26 16:10:57.693937] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.163 [2024-04-26 16:10:57.693959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.163 qpair failed and we were unable to recover it. 00:28:18.163 [2024-04-26 16:10:57.703737] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.163 [2024-04-26 16:10:57.703874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.163 [2024-04-26 16:10:57.703897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.163 [2024-04-26 16:10:57.703908] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.163 [2024-04-26 16:10:57.703917] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.163 [2024-04-26 16:10:57.703943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.163 qpair failed and we were unable to recover it. 00:28:18.163 [2024-04-26 16:10:57.713780] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.163 [2024-04-26 16:10:57.713971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.163 [2024-04-26 16:10:57.713994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.163 [2024-04-26 16:10:57.714006] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.163 [2024-04-26 16:10:57.714015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.163 [2024-04-26 16:10:57.714037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.163 qpair failed and we were unable to recover it. 
00:28:18.163 [2024-04-26 16:10:57.723869] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.163 [2024-04-26 16:10:57.724007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.163 [2024-04-26 16:10:57.724029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.163 [2024-04-26 16:10:57.724040] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.163 [2024-04-26 16:10:57.724049] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.163 [2024-04-26 16:10:57.724075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.163 qpair failed and we were unable to recover it. 00:28:18.163 [2024-04-26 16:10:57.733866] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.163 [2024-04-26 16:10:57.734027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.163 [2024-04-26 16:10:57.734050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.163 [2024-04-26 16:10:57.734061] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.163 [2024-04-26 16:10:57.734075] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.163 [2024-04-26 16:10:57.734099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.163 qpair failed and we were unable to recover it. 00:28:18.163 [2024-04-26 16:10:57.743837] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.163 [2024-04-26 16:10:57.744017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.163 [2024-04-26 16:10:57.744039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.163 [2024-04-26 16:10:57.744050] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.163 [2024-04-26 16:10:57.744058] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.163 [2024-04-26 16:10:57.744085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.163 qpair failed and we were unable to recover it. 
00:28:18.163 [2024-04-26 16:10:57.753917] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.163 [2024-04-26 16:10:57.754058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.163 [2024-04-26 16:10:57.754092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.163 [2024-04-26 16:10:57.754103] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.163 [2024-04-26 16:10:57.754112] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.163 [2024-04-26 16:10:57.754134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.163 qpair failed and we were unable to recover it. 00:28:18.163 [2024-04-26 16:10:57.763994] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.163 [2024-04-26 16:10:57.764146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.163 [2024-04-26 16:10:57.764169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.163 [2024-04-26 16:10:57.764180] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.163 [2024-04-26 16:10:57.764189] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.163 [2024-04-26 16:10:57.764211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.163 qpair failed and we were unable to recover it. 00:28:18.163 [2024-04-26 16:10:57.774036] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.163 [2024-04-26 16:10:57.774175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.163 [2024-04-26 16:10:57.774198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.163 [2024-04-26 16:10:57.774210] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.163 [2024-04-26 16:10:57.774218] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.163 [2024-04-26 16:10:57.774241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.163 qpair failed and we were unable to recover it. 
00:28:18.163 [2024-04-26 16:10:57.783995] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.163 [2024-04-26 16:10:57.784142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.163 [2024-04-26 16:10:57.784165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.163 [2024-04-26 16:10:57.784176] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.163 [2024-04-26 16:10:57.784185] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.163 [2024-04-26 16:10:57.784207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.163 qpair failed and we were unable to recover it. 00:28:18.163 [2024-04-26 16:10:57.794032] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.163 [2024-04-26 16:10:57.794189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.163 [2024-04-26 16:10:57.794211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.163 [2024-04-26 16:10:57.794223] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.163 [2024-04-26 16:10:57.794232] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.163 [2024-04-26 16:10:57.794258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.163 qpair failed and we were unable to recover it. 00:28:18.163 [2024-04-26 16:10:57.804114] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.163 [2024-04-26 16:10:57.804280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.163 [2024-04-26 16:10:57.804304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.163 [2024-04-26 16:10:57.804316] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.163 [2024-04-26 16:10:57.804326] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.163 [2024-04-26 16:10:57.804348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.163 qpair failed and we were unable to recover it. 
00:28:18.163 [2024-04-26 16:10:57.814085] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.164 [2024-04-26 16:10:57.814268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.164 [2024-04-26 16:10:57.814291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.164 [2024-04-26 16:10:57.814302] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.164 [2024-04-26 16:10:57.814312] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.164 [2024-04-26 16:10:57.814334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.164 qpair failed and we were unable to recover it. 00:28:18.164 [2024-04-26 16:10:57.824192] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.164 [2024-04-26 16:10:57.824328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.164 [2024-04-26 16:10:57.824351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.164 [2024-04-26 16:10:57.824370] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.164 [2024-04-26 16:10:57.824379] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.164 [2024-04-26 16:10:57.824401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.164 qpair failed and we were unable to recover it. 00:28:18.164 [2024-04-26 16:10:57.834210] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.164 [2024-04-26 16:10:57.834352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.164 [2024-04-26 16:10:57.834374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.164 [2024-04-26 16:10:57.834386] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.164 [2024-04-26 16:10:57.834394] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.164 [2024-04-26 16:10:57.834416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.164 qpair failed and we were unable to recover it. 
00:28:18.424 [2024-04-26 16:10:57.844196] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.424 [2024-04-26 16:10:57.844330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.424 [2024-04-26 16:10:57.844355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.424 [2024-04-26 16:10:57.844367] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.424 [2024-04-26 16:10:57.844376] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.424 [2024-04-26 16:10:57.844398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.424 qpair failed and we were unable to recover it. 00:28:18.424 [2024-04-26 16:10:57.854236] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.424 [2024-04-26 16:10:57.854419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.424 [2024-04-26 16:10:57.854441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.424 [2024-04-26 16:10:57.854454] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.424 [2024-04-26 16:10:57.854463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.424 [2024-04-26 16:10:57.854485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.425 qpair failed and we were unable to recover it. 00:28:18.425 [2024-04-26 16:10:57.864341] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.425 [2024-04-26 16:10:57.864509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.425 [2024-04-26 16:10:57.864531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.425 [2024-04-26 16:10:57.864542] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.425 [2024-04-26 16:10:57.864551] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.425 [2024-04-26 16:10:57.864573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.425 qpair failed and we were unable to recover it. 
00:28:18.425 [2024-04-26 16:10:57.874242] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.425 [2024-04-26 16:10:57.874383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.425 [2024-04-26 16:10:57.874407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.425 [2024-04-26 16:10:57.874419] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.425 [2024-04-26 16:10:57.874429] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.425 [2024-04-26 16:10:57.874451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.425 qpair failed and we were unable to recover it. 00:28:18.425 [2024-04-26 16:10:57.884314] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.425 [2024-04-26 16:10:57.884460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.425 [2024-04-26 16:10:57.884482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.425 [2024-04-26 16:10:57.884493] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.425 [2024-04-26 16:10:57.884505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.425 [2024-04-26 16:10:57.884527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.425 qpair failed and we were unable to recover it. 00:28:18.425 [2024-04-26 16:10:57.894292] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.425 [2024-04-26 16:10:57.894426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.425 [2024-04-26 16:10:57.894447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.425 [2024-04-26 16:10:57.894458] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.425 [2024-04-26 16:10:57.894467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.425 [2024-04-26 16:10:57.894488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.425 qpair failed and we were unable to recover it. 
00:28:18.425 [2024-04-26 16:10:57.904402] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.425 [2024-04-26 16:10:57.904578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.425 [2024-04-26 16:10:57.904600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.425 [2024-04-26 16:10:57.904611] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.425 [2024-04-26 16:10:57.904620] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.425 [2024-04-26 16:10:57.904642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.425 qpair failed and we were unable to recover it. 00:28:18.425 [2024-04-26 16:10:57.914352] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.425 [2024-04-26 16:10:57.914490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.425 [2024-04-26 16:10:57.914513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.425 [2024-04-26 16:10:57.914524] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.425 [2024-04-26 16:10:57.914533] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.425 [2024-04-26 16:10:57.914555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.425 qpair failed and we were unable to recover it. 00:28:18.425 [2024-04-26 16:10:57.924467] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.425 [2024-04-26 16:10:57.924627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.425 [2024-04-26 16:10:57.924649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.425 [2024-04-26 16:10:57.924660] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.425 [2024-04-26 16:10:57.924669] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.425 [2024-04-26 16:10:57.924691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.425 qpair failed and we were unable to recover it. 
00:28:18.425 [2024-04-26 16:10:57.934471] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.425 [2024-04-26 16:10:57.934614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.425 [2024-04-26 16:10:57.934637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.425 [2024-04-26 16:10:57.934648] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.425 [2024-04-26 16:10:57.934657] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.425 [2024-04-26 16:10:57.934683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.425 qpair failed and we were unable to recover it. 00:28:18.425 [2024-04-26 16:10:57.944511] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.425 [2024-04-26 16:10:57.944678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.425 [2024-04-26 16:10:57.944701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.425 [2024-04-26 16:10:57.944711] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.425 [2024-04-26 16:10:57.944720] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.425 [2024-04-26 16:10:57.944743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.425 qpair failed and we were unable to recover it. 00:28:18.425 [2024-04-26 16:10:57.954495] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.425 [2024-04-26 16:10:57.954628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.425 [2024-04-26 16:10:57.954650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.425 [2024-04-26 16:10:57.954661] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.425 [2024-04-26 16:10:57.954670] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.425 [2024-04-26 16:10:57.954691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.425 qpair failed and we were unable to recover it. 
00:28:18.425 [2024-04-26 16:10:57.964558] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.425 [2024-04-26 16:10:57.964738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.425 [2024-04-26 16:10:57.964760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.425 [2024-04-26 16:10:57.964771] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.425 [2024-04-26 16:10:57.964781] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.425 [2024-04-26 16:10:57.964803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.425 qpair failed and we were unable to recover it. 00:28:18.425 [2024-04-26 16:10:57.974592] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.425 [2024-04-26 16:10:57.974745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.425 [2024-04-26 16:10:57.974767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.425 [2024-04-26 16:10:57.974782] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.425 [2024-04-26 16:10:57.974790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.425 [2024-04-26 16:10:57.974813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.425 qpair failed and we were unable to recover it. 00:28:18.425 [2024-04-26 16:10:57.984611] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.425 [2024-04-26 16:10:57.984749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.426 [2024-04-26 16:10:57.984771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.426 [2024-04-26 16:10:57.984782] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.426 [2024-04-26 16:10:57.984790] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.426 [2024-04-26 16:10:57.984812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.426 qpair failed and we were unable to recover it. 
00:28:18.426 [2024-04-26 16:10:57.994712] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.426 [2024-04-26 16:10:57.994853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.426 [2024-04-26 16:10:57.994875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.426 [2024-04-26 16:10:57.994886] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.426 [2024-04-26 16:10:57.994895] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.426 [2024-04-26 16:10:57.994917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-04-26 16:10:58.004984] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.426 [2024-04-26 16:10:58.005133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.426 [2024-04-26 16:10:58.005155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.426 [2024-04-26 16:10:58.005167] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.426 [2024-04-26 16:10:58.005176] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.426 [2024-04-26 16:10:58.005198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-04-26 16:10:58.014705] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.426 [2024-04-26 16:10:58.014841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.426 [2024-04-26 16:10:58.014863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.426 [2024-04-26 16:10:58.014875] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.426 [2024-04-26 16:10:58.014883] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.426 [2024-04-26 16:10:58.014905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.426 qpair failed and we were unable to recover it. 
00:28:18.426 [2024-04-26 16:10:58.024749] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.426 [2024-04-26 16:10:58.024887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.426 [2024-04-26 16:10:58.024909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.426 [2024-04-26 16:10:58.024921] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.426 [2024-04-26 16:10:58.024929] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.426 [2024-04-26 16:10:58.024951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-04-26 16:10:58.034819] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.426 [2024-04-26 16:10:58.034961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.426 [2024-04-26 16:10:58.034984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.426 [2024-04-26 16:10:58.034996] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.426 [2024-04-26 16:10:58.035005] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.426 [2024-04-26 16:10:58.035028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-04-26 16:10:58.044814] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.426 [2024-04-26 16:10:58.044954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.426 [2024-04-26 16:10:58.044978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.426 [2024-04-26 16:10:58.044990] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.426 [2024-04-26 16:10:58.044999] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.426 [2024-04-26 16:10:58.045022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.426 qpair failed and we were unable to recover it. 
00:28:18.426 [2024-04-26 16:10:58.054811] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.426 [2024-04-26 16:10:58.054948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.426 [2024-04-26 16:10:58.054970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.426 [2024-04-26 16:10:58.054981] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.426 [2024-04-26 16:10:58.054990] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.426 [2024-04-26 16:10:58.055012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-04-26 16:10:58.064878] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.426 [2024-04-26 16:10:58.065055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.426 [2024-04-26 16:10:58.065082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.426 [2024-04-26 16:10:58.065096] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.426 [2024-04-26 16:10:58.065105] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.426 [2024-04-26 16:10:58.065128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-04-26 16:10:58.074793] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.426 [2024-04-26 16:10:58.074983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.426 [2024-04-26 16:10:58.075007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.426 [2024-04-26 16:10:58.075018] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.426 [2024-04-26 16:10:58.075027] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.426 [2024-04-26 16:10:58.075049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.426 qpair failed and we were unable to recover it. 
00:28:18.426 [2024-04-26 16:10:58.084942] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.426 [2024-04-26 16:10:58.085083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.426 [2024-04-26 16:10:58.085106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.426 [2024-04-26 16:10:58.085118] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.426 [2024-04-26 16:10:58.085126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.426 [2024-04-26 16:10:58.085148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-04-26 16:10:58.094961] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.426 [2024-04-26 16:10:58.095100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.426 [2024-04-26 16:10:58.095123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.426 [2024-04-26 16:10:58.095134] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.426 [2024-04-26 16:10:58.095143] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.426 [2024-04-26 16:10:58.095165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.426 qpair failed and we were unable to recover it. 00:28:18.426 [2024-04-26 16:10:58.104985] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.426 [2024-04-26 16:10:58.105166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.426 [2024-04-26 16:10:58.105188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.426 [2024-04-26 16:10:58.105199] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.427 [2024-04-26 16:10:58.105208] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.427 [2024-04-26 16:10:58.105231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.427 qpair failed and we were unable to recover it. 
00:28:18.687 [2024-04-26 16:10:58.115013] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.687 [2024-04-26 16:10:58.115303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.687 [2024-04-26 16:10:58.115326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.687 [2024-04-26 16:10:58.115337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.687 [2024-04-26 16:10:58.115346] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.687 [2024-04-26 16:10:58.115369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.687 qpair failed and we were unable to recover it. 00:28:18.687 [2024-04-26 16:10:58.125051] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.687 [2024-04-26 16:10:58.125191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.687 [2024-04-26 16:10:58.125213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.687 [2024-04-26 16:10:58.125224] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.687 [2024-04-26 16:10:58.125233] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.687 [2024-04-26 16:10:58.125255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.687 qpair failed and we were unable to recover it. 00:28:18.687 [2024-04-26 16:10:58.135024] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.687 [2024-04-26 16:10:58.135299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.687 [2024-04-26 16:10:58.135322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.687 [2024-04-26 16:10:58.135333] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.687 [2024-04-26 16:10:58.135343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.687 [2024-04-26 16:10:58.135364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.687 qpair failed and we were unable to recover it. 
00:28:18.687 [2024-04-26 16:10:58.145085] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.687 [2024-04-26 16:10:58.145227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.687 [2024-04-26 16:10:58.145248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.687 [2024-04-26 16:10:58.145260] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.687 [2024-04-26 16:10:58.145269] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.687 [2024-04-26 16:10:58.145291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.687 qpair failed and we were unable to recover it. 00:28:18.687 [2024-04-26 16:10:58.155095] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.687 [2024-04-26 16:10:58.155235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.687 [2024-04-26 16:10:58.155261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.687 [2024-04-26 16:10:58.155273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.687 [2024-04-26 16:10:58.155281] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.687 [2024-04-26 16:10:58.155303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.687 qpair failed and we were unable to recover it. 00:28:18.687 [2024-04-26 16:10:58.165209] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.687 [2024-04-26 16:10:58.165346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.687 [2024-04-26 16:10:58.165368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.687 [2024-04-26 16:10:58.165380] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.687 [2024-04-26 16:10:58.165388] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.687 [2024-04-26 16:10:58.165414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.687 qpair failed and we were unable to recover it. 
00:28:18.687 [2024-04-26 16:10:58.175209] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.687 [2024-04-26 16:10:58.175374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.687 [2024-04-26 16:10:58.175396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.687 [2024-04-26 16:10:58.175408] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.687 [2024-04-26 16:10:58.175417] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.687 [2024-04-26 16:10:58.175440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.687 qpair failed and we were unable to recover it. 00:28:18.687 [2024-04-26 16:10:58.185178] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.687 [2024-04-26 16:10:58.185361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.687 [2024-04-26 16:10:58.185383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.687 [2024-04-26 16:10:58.185395] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.687 [2024-04-26 16:10:58.185404] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.687 [2024-04-26 16:10:58.185426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.687 qpair failed and we were unable to recover it. 00:28:18.687 [2024-04-26 16:10:58.195266] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.687 [2024-04-26 16:10:58.195402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.687 [2024-04-26 16:10:58.195424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.687 [2024-04-26 16:10:58.195435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.687 [2024-04-26 16:10:58.195444] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.687 [2024-04-26 16:10:58.195469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.687 qpair failed and we were unable to recover it. 
00:28:18.687 [2024-04-26 16:10:58.205325] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.687 [2024-04-26 16:10:58.205466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.687 [2024-04-26 16:10:58.205488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.687 [2024-04-26 16:10:58.205499] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.687 [2024-04-26 16:10:58.205508] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.687 [2024-04-26 16:10:58.205529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.687 qpair failed and we were unable to recover it. 00:28:18.687 [2024-04-26 16:10:58.215219] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.687 [2024-04-26 16:10:58.215350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.687 [2024-04-26 16:10:58.215372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.687 [2024-04-26 16:10:58.215383] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.687 [2024-04-26 16:10:58.215391] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.687 [2024-04-26 16:10:58.215413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.687 qpair failed and we were unable to recover it. 00:28:18.687 [2024-04-26 16:10:58.225343] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.687 [2024-04-26 16:10:58.225513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.688 [2024-04-26 16:10:58.225534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.688 [2024-04-26 16:10:58.225546] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.688 [2024-04-26 16:10:58.225555] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.688 [2024-04-26 16:10:58.225578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.688 qpair failed and we were unable to recover it. 
00:28:18.688 [2024-04-26 16:10:58.235357] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.688 [2024-04-26 16:10:58.235528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.688 [2024-04-26 16:10:58.235552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.688 [2024-04-26 16:10:58.235564] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.688 [2024-04-26 16:10:58.235574] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.688 [2024-04-26 16:10:58.235596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.688 qpair failed and we were unable to recover it. 00:28:18.688 [2024-04-26 16:10:58.245393] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.688 [2024-04-26 16:10:58.245527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.688 [2024-04-26 16:10:58.245552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.688 [2024-04-26 16:10:58.245564] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.688 [2024-04-26 16:10:58.245572] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.688 [2024-04-26 16:10:58.245594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.688 qpair failed and we were unable to recover it. 00:28:18.688 [2024-04-26 16:10:58.255370] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.688 [2024-04-26 16:10:58.255549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.688 [2024-04-26 16:10:58.255573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.688 [2024-04-26 16:10:58.255585] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.688 [2024-04-26 16:10:58.255595] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.688 [2024-04-26 16:10:58.255617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.688 qpair failed and we were unable to recover it. 
00:28:18.688 [2024-04-26 16:10:58.265449] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.688 [2024-04-26 16:10:58.265632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.688 [2024-04-26 16:10:58.265655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.688 [2024-04-26 16:10:58.265666] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.688 [2024-04-26 16:10:58.265675] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.688 [2024-04-26 16:10:58.265697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.688 qpair failed and we were unable to recover it. 00:28:18.688 [2024-04-26 16:10:58.275542] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.688 [2024-04-26 16:10:58.275709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.688 [2024-04-26 16:10:58.275731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.688 [2024-04-26 16:10:58.275743] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.688 [2024-04-26 16:10:58.275751] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.688 [2024-04-26 16:10:58.275773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.688 qpair failed and we were unable to recover it. 00:28:18.688 [2024-04-26 16:10:58.285444] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.688 [2024-04-26 16:10:58.285589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.688 [2024-04-26 16:10:58.285612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.688 [2024-04-26 16:10:58.285623] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.688 [2024-04-26 16:10:58.285635] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.688 [2024-04-26 16:10:58.285657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.688 qpair failed and we were unable to recover it. 
00:28:18.688 [2024-04-26 16:10:58.295524] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.688 [2024-04-26 16:10:58.295669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.688 [2024-04-26 16:10:58.295691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.688 [2024-04-26 16:10:58.295702] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.688 [2024-04-26 16:10:58.295711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.688 [2024-04-26 16:10:58.295732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.688 qpair failed and we were unable to recover it. 00:28:18.688 [2024-04-26 16:10:58.305569] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.688 [2024-04-26 16:10:58.305717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.688 [2024-04-26 16:10:58.305739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.688 [2024-04-26 16:10:58.305750] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.688 [2024-04-26 16:10:58.305759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.688 [2024-04-26 16:10:58.305781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.688 qpair failed and we were unable to recover it. 00:28:18.688 [2024-04-26 16:10:58.315583] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.688 [2024-04-26 16:10:58.315723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.688 [2024-04-26 16:10:58.315746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.688 [2024-04-26 16:10:58.315757] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.688 [2024-04-26 16:10:58.315766] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.688 [2024-04-26 16:10:58.315787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.688 qpair failed and we were unable to recover it. 
00:28:18.688 [2024-04-26 16:10:58.325632] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.688 [2024-04-26 16:10:58.325772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.688 [2024-04-26 16:10:58.325796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.688 [2024-04-26 16:10:58.325807] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.688 [2024-04-26 16:10:58.325817] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.688 [2024-04-26 16:10:58.325840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.688 qpair failed and we were unable to recover it. 00:28:18.688 [2024-04-26 16:10:58.335638] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.688 [2024-04-26 16:10:58.335778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.688 [2024-04-26 16:10:58.335801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.688 [2024-04-26 16:10:58.335812] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.688 [2024-04-26 16:10:58.335827] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.688 [2024-04-26 16:10:58.335850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.688 qpair failed and we were unable to recover it. 00:28:18.688 [2024-04-26 16:10:58.345674] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.688 [2024-04-26 16:10:58.345815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.688 [2024-04-26 16:10:58.345837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.688 [2024-04-26 16:10:58.345848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.688 [2024-04-26 16:10:58.345857] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.689 [2024-04-26 16:10:58.345878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.689 qpair failed and we were unable to recover it. 
00:28:18.689 [2024-04-26 16:10:58.355660] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.689 [2024-04-26 16:10:58.355803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.689 [2024-04-26 16:10:58.355825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.689 [2024-04-26 16:10:58.355837] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.689 [2024-04-26 16:10:58.355845] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.689 [2024-04-26 16:10:58.355867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.689 qpair failed and we were unable to recover it. 00:28:18.689 [2024-04-26 16:10:58.365725] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.689 [2024-04-26 16:10:58.365865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.689 [2024-04-26 16:10:58.365887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.689 [2024-04-26 16:10:58.365898] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.689 [2024-04-26 16:10:58.365907] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.689 [2024-04-26 16:10:58.365929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.689 qpair failed and we were unable to recover it. 00:28:18.948 [2024-04-26 16:10:58.375674] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.948 [2024-04-26 16:10:58.375807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.948 [2024-04-26 16:10:58.375830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.948 [2024-04-26 16:10:58.375841] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.948 [2024-04-26 16:10:58.375853] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.948 [2024-04-26 16:10:58.375875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.948 qpair failed and we were unable to recover it. 
00:28:18.948 [2024-04-26 16:10:58.385744] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.948 [2024-04-26 16:10:58.385919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.948 [2024-04-26 16:10:58.385941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.948 [2024-04-26 16:10:58.385952] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.948 [2024-04-26 16:10:58.385961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.948 [2024-04-26 16:10:58.385984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.948 qpair failed and we were unable to recover it. 00:28:18.948 [2024-04-26 16:10:58.395763] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.948 [2024-04-26 16:10:58.395900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.948 [2024-04-26 16:10:58.395922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.948 [2024-04-26 16:10:58.395933] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.948 [2024-04-26 16:10:58.395942] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.948 [2024-04-26 16:10:58.395969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.948 qpair failed and we were unable to recover it. 00:28:18.948 [2024-04-26 16:10:58.405802] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.948 [2024-04-26 16:10:58.405935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.948 [2024-04-26 16:10:58.405957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.948 [2024-04-26 16:10:58.405968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.948 [2024-04-26 16:10:58.405977] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.948 [2024-04-26 16:10:58.405999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.948 qpair failed and we were unable to recover it. 
00:28:18.948 [2024-04-26 16:10:58.415814] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.948 [2024-04-26 16:10:58.415952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.948 [2024-04-26 16:10:58.415977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.948 [2024-04-26 16:10:58.415994] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.948 [2024-04-26 16:10:58.416005] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000010040 00:28:18.948 [2024-04-26 16:10:58.416029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:18.948 qpair failed and we were unable to recover it. 00:28:18.948 [2024-04-26 16:10:58.425903] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.948 [2024-04-26 16:10:58.426053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.948 [2024-04-26 16:10:58.426087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.948 [2024-04-26 16:10:58.426101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.948 [2024-04-26 16:10:58.426110] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:28:18.948 [2024-04-26 16:10:58.426137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:18.948 qpair failed and we were unable to recover it. 00:28:18.948 [2024-04-26 16:10:58.435989] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.948 [2024-04-26 16:10:58.436130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.948 [2024-04-26 16:10:58.436153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.948 [2024-04-26 16:10:58.436165] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.948 [2024-04-26 16:10:58.436174] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000020040 00:28:18.948 [2024-04-26 16:10:58.436198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:18.948 qpair failed and we were unable to recover it. 
00:28:18.948 [2024-04-26 16:10:58.445987] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.948 [2024-04-26 16:10:58.446178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.948 [2024-04-26 16:10:58.446213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.948 [2024-04-26 16:10:58.446231] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.948 [2024-04-26 16:10:58.446244] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000030040 00:28:18.948 [2024-04-26 16:10:58.446277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:18.948 qpair failed and we were unable to recover it. 00:28:18.948 [2024-04-26 16:10:58.455959] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.948 [2024-04-26 16:10:58.456102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.948 [2024-04-26 16:10:58.456127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.949 [2024-04-26 16:10:58.456138] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.949 [2024-04-26 16:10:58.456146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000030040 00:28:18.949 [2024-04-26 16:10:58.456169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:18.949 qpair failed and we were unable to recover it. 00:28:18.949 [2024-04-26 16:10:58.466039] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.949 [2024-04-26 16:10:58.466274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.949 [2024-04-26 16:10:58.466312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.949 [2024-04-26 16:10:58.466336] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.949 [2024-04-26 16:10:58.466350] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:18.949 [2024-04-26 16:10:58.466384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.949 qpair failed and we were unable to recover it. 
00:28:18.949 [2024-04-26 16:10:58.476053] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.949 [2024-04-26 16:10:58.476200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.949 [2024-04-26 16:10:58.476225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.949 [2024-04-26 16:10:58.476247] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.949 [2024-04-26 16:10:58.476256] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x614000002440 00:28:18.949 [2024-04-26 16:10:58.476282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.949 qpair failed and we were unable to recover it. 00:28:18.949 [2024-04-26 16:10:58.476800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002240 is same with the state(5) to be set 00:28:18.949 [2024-04-26 16:10:58.477399] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000002240 (9): Bad file descriptor 00:28:18.949 Initializing NVMe Controllers 00:28:18.949 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:18.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:18.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:18.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:18.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:18.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:18.949 Initialization complete. Launching workers. 
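A minimal aside, not produced by this run: every failed CONNECT above targets the same transport ID (TCP, 10.0.0.2:4420, subnqn nqn.2016-06.io.spdk:cnode1), and the host gives up on each qpair once spdk_nvme_qpair_process_completions() reports transport error -6 (ENXIO, "No such device or address"). Assuming the same build-tree layout used elsewhere in this log, that listener could be probed by hand with the identify tool; this command is illustrative only and is not part of the test flow.

  # hypothetical manual probe of the listener the failing CONNECTs point at
  ./build/bin/spdk_nvme_identify -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'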
00:28:18.949 Starting thread on core 1 00:28:18.949 Starting thread on core 2 00:28:18.949 Starting thread on core 3 00:28:18.949 Starting thread on core 0 00:28:18.949 16:10:58 -- host/target_disconnect.sh@59 -- # sync 00:28:18.949 00:28:18.949 real 0m11.353s 00:28:18.949 user 0m20.414s 00:28:18.949 sys 0m4.209s 00:28:18.949 16:10:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:18.949 16:10:58 -- common/autotest_common.sh@10 -- # set +x 00:28:18.949 ************************************ 00:28:18.949 END TEST nvmf_target_disconnect_tc2 00:28:18.949 ************************************ 00:28:18.949 16:10:58 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:28:18.949 16:10:58 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:18.949 16:10:58 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:28:18.949 16:10:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:18.949 16:10:58 -- nvmf/common.sh@117 -- # sync 00:28:18.949 16:10:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:18.949 16:10:58 -- nvmf/common.sh@120 -- # set +e 00:28:18.949 16:10:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:18.949 16:10:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:18.949 rmmod nvme_tcp 00:28:18.949 rmmod nvme_fabrics 00:28:18.949 rmmod nvme_keyring 00:28:18.949 16:10:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:18.949 16:10:58 -- nvmf/common.sh@124 -- # set -e 00:28:18.949 16:10:58 -- nvmf/common.sh@125 -- # return 0 00:28:18.949 16:10:58 -- nvmf/common.sh@478 -- # '[' -n 2606245 ']' 00:28:18.949 16:10:58 -- nvmf/common.sh@479 -- # killprocess 2606245 00:28:18.949 16:10:58 -- common/autotest_common.sh@936 -- # '[' -z 2606245 ']' 00:28:18.949 16:10:58 -- common/autotest_common.sh@940 -- # kill -0 2606245 00:28:18.949 16:10:58 -- common/autotest_common.sh@941 -- # uname 00:28:18.949 16:10:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:18.949 16:10:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2606245 00:28:19.207 16:10:58 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:28:19.207 16:10:58 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:28:19.207 16:10:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2606245' 00:28:19.207 killing process with pid 2606245 00:28:19.207 16:10:58 -- common/autotest_common.sh@955 -- # kill 2606245 00:28:19.207 16:10:58 -- common/autotest_common.sh@960 -- # wait 2606245 00:28:20.581 16:11:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:20.581 16:11:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:20.581 16:11:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:20.581 16:11:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:20.581 16:11:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:20.581 16:11:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.581 16:11:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:20.581 16:11:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.114 16:11:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:23.114 00:28:23.114 real 0m21.047s 00:28:23.114 user 0m50.795s 00:28:23.114 sys 0m9.005s 00:28:23.114 16:11:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:23.114 16:11:02 -- common/autotest_common.sh@10 -- # set +x 00:28:23.114 ************************************ 00:28:23.114 END TEST nvmf_target_disconnect 00:28:23.114 
************************************ 00:28:23.114 16:11:02 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:28:23.114 16:11:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:23.114 16:11:02 -- common/autotest_common.sh@10 -- # set +x 00:28:23.114 16:11:02 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:28:23.114 00:28:23.114 real 20m26.103s 00:28:23.114 user 44m5.881s 00:28:23.114 sys 5m56.275s 00:28:23.114 16:11:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:23.114 16:11:02 -- common/autotest_common.sh@10 -- # set +x 00:28:23.114 ************************************ 00:28:23.114 END TEST nvmf_tcp 00:28:23.114 ************************************ 00:28:23.114 16:11:02 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:28:23.114 16:11:02 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:23.114 16:11:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:23.114 16:11:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:23.114 16:11:02 -- common/autotest_common.sh@10 -- # set +x 00:28:23.114 ************************************ 00:28:23.114 START TEST spdkcli_nvmf_tcp 00:28:23.114 ************************************ 00:28:23.114 16:11:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:23.114 * Looking for test storage... 00:28:23.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:28:23.114 16:11:02 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:28:23.114 16:11:02 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:28:23.114 16:11:02 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:28:23.114 16:11:02 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:23.114 16:11:02 -- nvmf/common.sh@7 -- # uname -s 00:28:23.114 16:11:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:23.114 16:11:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:23.114 16:11:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:23.114 16:11:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:23.114 16:11:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:23.114 16:11:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:23.114 16:11:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:23.114 16:11:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:23.114 16:11:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:23.114 16:11:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:23.114 16:11:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:23.114 16:11:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:23.114 16:11:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:23.114 16:11:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:23.114 16:11:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:23.114 16:11:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:23.114 16:11:02 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:23.114 16:11:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:23.114 16:11:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:23.114 16:11:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:23.114 16:11:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.114 16:11:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.114 16:11:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.114 16:11:02 -- paths/export.sh@5 -- # export PATH 00:28:23.115 16:11:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.115 16:11:02 -- nvmf/common.sh@47 -- # : 0 00:28:23.115 16:11:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:23.115 16:11:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:23.115 16:11:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:23.115 16:11:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:23.115 16:11:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:23.115 16:11:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:23.115 16:11:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:23.115 16:11:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:23.115 16:11:02 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:28:23.115 16:11:02 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:28:23.115 16:11:02 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:28:23.115 16:11:02 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:28:23.115 16:11:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:23.115 16:11:02 -- common/autotest_common.sh@10 -- # set +x 00:28:23.115 16:11:02 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:28:23.115 16:11:02 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2608050 00:28:23.115 16:11:02 -- spdkcli/common.sh@34 -- # waitforlisten 2608050 00:28:23.115 16:11:02 -- common/autotest_common.sh@817 -- # '[' -z 2608050 ']' 00:28:23.115 16:11:02 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.115 16:11:02 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:28:23.115 16:11:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:23.115 16:11:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.115 16:11:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:23.115 16:11:02 -- common/autotest_common.sh@10 -- # set +x 00:28:23.115 [2024-04-26 16:11:02.686137] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:28:23.115 [2024-04-26 16:11:02.686227] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2608050 ] 00:28:23.115 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.115 [2024-04-26 16:11:02.789232] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:23.374 [2024-04-26 16:11:03.007419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.374 [2024-04-26 16:11:03.007430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.940 16:11:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:23.940 16:11:03 -- common/autotest_common.sh@850 -- # return 0 00:28:23.940 16:11:03 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:28:23.940 16:11:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:23.940 16:11:03 -- common/autotest_common.sh@10 -- # set +x 00:28:23.940 16:11:03 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:28:23.940 16:11:03 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:28:23.940 16:11:03 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:28:23.940 16:11:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:23.940 16:11:03 -- common/autotest_common.sh@10 -- # set +x 00:28:23.941 16:11:03 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:23.941 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:23.941 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:28:23.941 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:28:23.941 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:28:23.941 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:28:23.941 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:28:23.941 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:23.941 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:28:23.941 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:28:23.941 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:23.941 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:23.941 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:28:23.941 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:23.941 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:23.941 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:28:23.941 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:23.941 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:23.941 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:23.941 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:23.941 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:28:23.941 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:28:23.941 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:23.941 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:28:23.941 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:23.941 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:28:23.941 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:28:23.941 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:28:23.941 ' 00:28:24.198 [2024-04-26 16:11:03.823595] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:28:26.725 [2024-04-26 16:11:06.082534] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.658 [2024-04-26 16:11:07.258549] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:28:30.183 [2024-04-26 16:11:09.421361] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:28:32.083 [2024-04-26 16:11:11.283536] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:28:33.453 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:28:33.453 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:28:33.453 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:28:33.453 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:28:33.453 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:28:33.453 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:28:33.453 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:28:33.453 Executing command: ['/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:33.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:28:33.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:28:33.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:33.453 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:33.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:28:33.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:33.453 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:33.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:28:33.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:33.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:33.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:33.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:33.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:28:33.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:28:33.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:33.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:28:33.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:33.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:28:33.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:28:33.453 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:28:33.453 16:11:12 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:28:33.453 16:11:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:33.453 16:11:12 -- common/autotest_common.sh@10 -- # set +x 00:28:33.453 16:11:12 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:28:33.453 16:11:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:33.453 16:11:12 -- common/autotest_common.sh@10 -- # set +x 00:28:33.453 16:11:12 -- spdkcli/nvmf.sh@69 -- # check_match 00:28:33.453 16:11:12 -- spdkcli/common.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:28:33.712 16:11:13 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:28:33.712 16:11:13 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:28:33.712 16:11:13 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:28:33.712 16:11:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:33.712 16:11:13 -- common/autotest_common.sh@10 -- # set +x 00:28:33.712 16:11:13 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:28:33.712 16:11:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:33.712 16:11:13 -- common/autotest_common.sh@10 -- # set +x 00:28:33.712 16:11:13 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:28:33.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:28:33.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:33.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:28:33.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:28:33.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:28:33.712 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:28:33.712 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:33.712 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:28:33.712 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:28:33.712 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:28:33.712 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:28:33.712 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:28:33.712 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:28:33.712 ' 00:28:39.050 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:28:39.050 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:28:39.050 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:39.050 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:28:39.050 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:28:39.050 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:28:39.050 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:28:39.050 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:39.050 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:28:39.050 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 
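A rough equivalent, not captured in this run: the delete sequence being executed above (and continuing below) goes through spdkcli, but the same teardown could be driven with scripts/rpc.py. The spdkcli-to-RPC mapping is an assumption; the NQNs, the 127.0.0.1:4262 listener, and the Malloc bdev names are the ones shown in the executed commands.

  # hypothetical rpc.py teardown mirroring the spdkcli delete sequence
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2014-08.org.spdk:cnode1 1
  ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2014-08.org.spdk:cnode1 nqn.2014-08.org.spdk:cnode2
  ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4262
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode3
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode2
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode1
  for b in Malloc6 Malloc5 Malloc4 Malloc3 Malloc2 Malloc1; do ./scripts/rpc.py bdev_malloc_delete "$b"; done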
00:28:39.050 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:28:39.050 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:28:39.050 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:28:39.050 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:28:39.308 16:11:18 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:28:39.308 16:11:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:39.308 16:11:18 -- common/autotest_common.sh@10 -- # set +x 00:28:39.308 16:11:18 -- spdkcli/nvmf.sh@90 -- # killprocess 2608050 00:28:39.308 16:11:18 -- common/autotest_common.sh@936 -- # '[' -z 2608050 ']' 00:28:39.308 16:11:18 -- common/autotest_common.sh@940 -- # kill -0 2608050 00:28:39.308 16:11:18 -- common/autotest_common.sh@941 -- # uname 00:28:39.308 16:11:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:39.308 16:11:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2608050 00:28:39.308 16:11:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:39.308 16:11:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:39.308 16:11:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2608050' 00:28:39.308 killing process with pid 2608050 00:28:39.308 16:11:18 -- common/autotest_common.sh@955 -- # kill 2608050 00:28:39.308 [2024-04-26 16:11:18.847609] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:28:39.308 16:11:18 -- common/autotest_common.sh@960 -- # wait 2608050 00:28:40.682 16:11:20 -- spdkcli/nvmf.sh@1 -- # cleanup 00:28:40.682 16:11:20 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:28:40.682 16:11:20 -- spdkcli/common.sh@13 -- # '[' -n 2608050 ']' 00:28:40.682 16:11:20 -- spdkcli/common.sh@14 -- # killprocess 2608050 00:28:40.682 16:11:20 -- common/autotest_common.sh@936 -- # '[' -z 2608050 ']' 00:28:40.682 16:11:20 -- common/autotest_common.sh@940 -- # kill -0 2608050 00:28:40.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2608050) - No such process 00:28:40.682 16:11:20 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2608050 is not found' 00:28:40.682 Process with pid 2608050 is not found 00:28:40.682 16:11:20 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:28:40.682 16:11:20 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:28:40.682 16:11:20 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:28:40.682 00:28:40.682 real 0m17.649s 00:28:40.683 user 0m35.619s 00:28:40.683 sys 0m0.893s 00:28:40.683 16:11:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:40.683 16:11:20 -- common/autotest_common.sh@10 -- # set +x 00:28:40.683 ************************************ 00:28:40.683 END TEST spdkcli_nvmf_tcp 00:28:40.683 ************************************ 00:28:40.683 16:11:20 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:40.683 16:11:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:40.683 16:11:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:40.683 16:11:20 -- 
common/autotest_common.sh@10 -- # set +x 00:28:40.683 ************************************ 00:28:40.683 START TEST nvmf_identify_passthru 00:28:40.683 ************************************ 00:28:40.683 16:11:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:40.941 * Looking for test storage... 00:28:40.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:40.941 16:11:20 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:40.941 16:11:20 -- nvmf/common.sh@7 -- # uname -s 00:28:40.941 16:11:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:40.941 16:11:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:40.941 16:11:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:40.941 16:11:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:40.941 16:11:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:40.941 16:11:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:40.941 16:11:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:40.941 16:11:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:40.941 16:11:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:40.941 16:11:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:40.941 16:11:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:40.941 16:11:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:40.941 16:11:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:40.941 16:11:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:40.941 16:11:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:40.941 16:11:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:40.941 16:11:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:40.941 16:11:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:40.941 16:11:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:40.941 16:11:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:40.941 16:11:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.941 16:11:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.941 16:11:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.941 16:11:20 -- paths/export.sh@5 -- # export PATH 00:28:40.941 16:11:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.941 16:11:20 -- nvmf/common.sh@47 -- # : 0 00:28:40.941 16:11:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:40.941 16:11:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:40.941 16:11:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:40.941 16:11:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:40.941 16:11:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:40.941 16:11:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:40.941 16:11:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:40.941 16:11:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:40.941 16:11:20 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:40.941 16:11:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:40.941 16:11:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:40.941 16:11:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:40.941 16:11:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.941 16:11:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.941 16:11:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.941 16:11:20 -- paths/export.sh@5 -- # export PATH 00:28:40.941 16:11:20 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.941 16:11:20 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:28:40.941 16:11:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:40.941 16:11:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:40.941 16:11:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:40.941 16:11:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:40.941 16:11:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:40.941 16:11:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.941 16:11:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:40.941 16:11:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.941 16:11:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:40.941 16:11:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:40.941 16:11:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:40.941 16:11:20 -- common/autotest_common.sh@10 -- # set +x 00:28:46.249 16:11:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:46.249 16:11:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:46.249 16:11:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:46.249 16:11:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:46.249 16:11:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:46.249 16:11:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:46.249 16:11:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:46.249 16:11:25 -- nvmf/common.sh@295 -- # net_devs=() 00:28:46.249 16:11:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:46.249 16:11:25 -- nvmf/common.sh@296 -- # e810=() 00:28:46.249 16:11:25 -- nvmf/common.sh@296 -- # local -ga e810 00:28:46.249 16:11:25 -- nvmf/common.sh@297 -- # x722=() 00:28:46.249 16:11:25 -- nvmf/common.sh@297 -- # local -ga x722 00:28:46.249 16:11:25 -- nvmf/common.sh@298 -- # mlx=() 00:28:46.249 16:11:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:46.249 16:11:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:46.249 16:11:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:46.249 16:11:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:46.249 16:11:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:46.249 16:11:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:46.249 16:11:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:46.249 16:11:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:46.249 16:11:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:46.249 16:11:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:46.249 16:11:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:46.249 16:11:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:46.249 16:11:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:46.249 16:11:25 -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:46.249 16:11:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:46.249 16:11:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:46.249 16:11:25 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:46.249 16:11:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:46.249 16:11:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:46.249 16:11:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:46.249 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:46.249 16:11:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:46.249 16:11:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:46.249 16:11:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.249 16:11:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.249 16:11:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:46.249 16:11:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:46.249 16:11:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:46.249 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:46.250 16:11:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:46.250 16:11:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:46.250 16:11:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.250 16:11:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.250 16:11:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:46.250 16:11:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:46.250 16:11:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:46.250 16:11:25 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:46.250 16:11:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:46.250 16:11:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.250 16:11:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:46.250 16:11:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.250 16:11:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:46.250 Found net devices under 0000:86:00.0: cvl_0_0 00:28:46.250 16:11:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.250 16:11:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:46.250 16:11:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.250 16:11:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:46.250 16:11:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.250 16:11:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:46.250 Found net devices under 0000:86:00.1: cvl_0_1 00:28:46.250 16:11:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.250 16:11:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:46.250 16:11:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:46.250 16:11:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:46.250 16:11:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:46.250 16:11:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:46.250 16:11:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:46.250 16:11:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:46.250 16:11:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:46.250 16:11:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:46.250 16:11:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:46.250 16:11:25 -- nvmf/common.sh@237 
-- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:46.250 16:11:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:46.250 16:11:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:46.250 16:11:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:46.250 16:11:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:46.250 16:11:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:46.250 16:11:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:46.250 16:11:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:46.250 16:11:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:46.250 16:11:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:46.250 16:11:25 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:46.250 16:11:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:46.250 16:11:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:46.250 16:11:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:46.250 16:11:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:46.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:46.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:28:46.250 00:28:46.250 --- 10.0.0.2 ping statistics --- 00:28:46.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.250 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:28:46.250 16:11:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:46.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:46.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.399 ms 00:28:46.250 00:28:46.250 --- 10.0.0.1 ping statistics --- 00:28:46.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.250 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:28:46.250 16:11:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.250 16:11:25 -- nvmf/common.sh@411 -- # return 0 00:28:46.250 16:11:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:46.250 16:11:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.250 16:11:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:46.250 16:11:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:46.250 16:11:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:46.250 16:11:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:46.250 16:11:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:46.250 16:11:25 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:28:46.250 16:11:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:46.250 16:11:25 -- common/autotest_common.sh@10 -- # set +x 00:28:46.250 16:11:25 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:28:46.250 16:11:25 -- common/autotest_common.sh@1510 -- # bdfs=() 00:28:46.250 16:11:25 -- common/autotest_common.sh@1510 -- # local bdfs 00:28:46.250 16:11:25 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:28:46.250 16:11:25 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:28:46.250 16:11:25 -- common/autotest_common.sh@1499 -- # bdfs=() 00:28:46.250 16:11:25 -- common/autotest_common.sh@1499 -- # local bdfs 00:28:46.250 16:11:25 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:28:46.250 16:11:25 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:46.250 16:11:25 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:28:46.250 16:11:25 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:28:46.250 16:11:25 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:5e:00.0 00:28:46.250 16:11:25 -- common/autotest_common.sh@1513 -- # echo 0000:5e:00.0 00:28:46.250 16:11:25 -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:28:46.250 16:11:25 -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:28:46.250 16:11:25 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:28:46.250 16:11:25 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:28:46.250 16:11:25 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:28:46.250 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.432 16:11:30 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:28:50.432 16:11:30 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:28:50.432 16:11:30 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:28:50.432 16:11:30 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:28:50.690 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.870 16:11:34 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:28:54.870 16:11:34 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:28:54.870 16:11:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:54.870 16:11:34 -- common/autotest_common.sh@10 -- # set +x 00:28:54.870 16:11:34 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:28:54.870 16:11:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:54.870 16:11:34 -- common/autotest_common.sh@10 -- # set +x 00:28:54.870 16:11:34 -- target/identify_passthru.sh@31 -- # nvmfpid=2615336 00:28:54.870 16:11:34 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:54.870 16:11:34 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:54.870 16:11:34 -- target/identify_passthru.sh@35 -- # waitforlisten 2615336 00:28:54.870 16:11:34 -- common/autotest_common.sh@817 -- # '[' -z 2615336 ']' 00:28:54.870 16:11:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.870 16:11:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:54.870 16:11:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.870 16:11:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:54.870 16:11:34 -- common/autotest_common.sh@10 -- # set +x 00:28:54.870 [2024-04-26 16:11:34.373004] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
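The nvmf_tcp_init trace above is easier to read condensed into plain shell: one port of the E810 pair (cvl_0_0) is moved into a private network namespace for the target, the other (cvl_0_1) stays in the default namespace for the initiator, and the two sides get addresses on 10.0.0.0/24 so a single host can exercise NVMe/TCP end to end over real NICs. A minimal sketch, using exactly the interface names, namespace and addresses from this run:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                   # default namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> default namespace

Everything that must run on the target side of the link is then prefixed with 'ip netns exec cvl_0_0_ns_spdk', which is what NVMF_TARGET_NS_CMD expands to in the trace.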
00:28:54.870 [2024-04-26 16:11:34.373107] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:54.870 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.870 [2024-04-26 16:11:34.485178] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:55.128 [2024-04-26 16:11:34.703822] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.128 [2024-04-26 16:11:34.703871] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.128 [2024-04-26 16:11:34.703881] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.128 [2024-04-26 16:11:34.703890] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.128 [2024-04-26 16:11:34.703897] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:55.128 [2024-04-26 16:11:34.704018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.128 [2024-04-26 16:11:34.704115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:55.128 [2024-04-26 16:11:34.704177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.128 [2024-04-26 16:11:34.704184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:55.693 16:11:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:55.693 16:11:35 -- common/autotest_common.sh@850 -- # return 0 00:28:55.693 16:11:35 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:28:55.693 16:11:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.693 16:11:35 -- common/autotest_common.sh@10 -- # set +x 00:28:55.693 INFO: Log level set to 20 00:28:55.693 INFO: Requests: 00:28:55.693 { 00:28:55.693 "jsonrpc": "2.0", 00:28:55.693 "method": "nvmf_set_config", 00:28:55.693 "id": 1, 00:28:55.693 "params": { 00:28:55.693 "admin_cmd_passthru": { 00:28:55.693 "identify_ctrlr": true 00:28:55.693 } 00:28:55.693 } 00:28:55.693 } 00:28:55.693 00:28:55.693 INFO: response: 00:28:55.693 { 00:28:55.693 "jsonrpc": "2.0", 00:28:55.693 "id": 1, 00:28:55.693 "result": true 00:28:55.693 } 00:28:55.693 00:28:55.693 16:11:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.693 16:11:35 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:28:55.693 16:11:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.693 16:11:35 -- common/autotest_common.sh@10 -- # set +x 00:28:55.693 INFO: Setting log level to 20 00:28:55.693 INFO: Setting log level to 20 00:28:55.693 INFO: Log level set to 20 00:28:55.693 INFO: Log level set to 20 00:28:55.693 INFO: Requests: 00:28:55.693 { 00:28:55.693 "jsonrpc": "2.0", 00:28:55.693 "method": "framework_start_init", 00:28:55.693 "id": 1 00:28:55.693 } 00:28:55.693 00:28:55.693 INFO: Requests: 00:28:55.693 { 00:28:55.693 "jsonrpc": "2.0", 00:28:55.693 "method": "framework_start_init", 00:28:55.693 "id": 1 00:28:55.693 } 00:28:55.693 00:28:55.951 [2024-04-26 16:11:35.552022] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:28:55.951 INFO: response: 00:28:55.951 { 00:28:55.951 "jsonrpc": "2.0", 00:28:55.951 "id": 1, 00:28:55.951 "result": true 00:28:55.951 } 00:28:55.951 00:28:55.951 INFO: response: 00:28:55.951 { 00:28:55.951 
"jsonrpc": "2.0", 00:28:55.951 "id": 1, 00:28:55.951 "result": true 00:28:55.951 } 00:28:55.951 00:28:55.951 16:11:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.951 16:11:35 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:55.951 16:11:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.951 16:11:35 -- common/autotest_common.sh@10 -- # set +x 00:28:55.951 INFO: Setting log level to 40 00:28:55.951 INFO: Setting log level to 40 00:28:55.951 INFO: Setting log level to 40 00:28:55.951 [2024-04-26 16:11:35.569919] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.951 16:11:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.951 16:11:35 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:28:55.951 16:11:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:55.951 16:11:35 -- common/autotest_common.sh@10 -- # set +x 00:28:55.951 16:11:35 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:28:55.951 16:11:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.951 16:11:35 -- common/autotest_common.sh@10 -- # set +x 00:28:59.234 Nvme0n1 00:28:59.234 16:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:59.234 16:11:38 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:28:59.234 16:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:59.234 16:11:38 -- common/autotest_common.sh@10 -- # set +x 00:28:59.234 16:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:59.234 16:11:38 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:59.234 16:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:59.234 16:11:38 -- common/autotest_common.sh@10 -- # set +x 00:28:59.234 16:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:59.234 16:11:38 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:59.234 16:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:59.234 16:11:38 -- common/autotest_common.sh@10 -- # set +x 00:28:59.234 [2024-04-26 16:11:38.530504] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.234 16:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:59.234 16:11:38 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:28:59.234 16:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:59.234 16:11:38 -- common/autotest_common.sh@10 -- # set +x 00:28:59.234 [2024-04-26 16:11:38.538239] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:28:59.234 [ 00:28:59.234 { 00:28:59.234 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:59.234 "subtype": "Discovery", 00:28:59.234 "listen_addresses": [], 00:28:59.234 "allow_any_host": true, 00:28:59.234 "hosts": [] 00:28:59.234 }, 00:28:59.234 { 00:28:59.234 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:59.234 "subtype": "NVMe", 00:28:59.234 "listen_addresses": [ 00:28:59.234 { 00:28:59.234 "transport": "TCP", 00:28:59.234 "trtype": "TCP", 00:28:59.234 "adrfam": "IPv4", 00:28:59.234 "traddr": "10.0.0.2", 00:28:59.234 "trsvcid": "4420" 00:28:59.235 } 00:28:59.235 ], 
00:28:59.235 "allow_any_host": true, 00:28:59.235 "hosts": [], 00:28:59.235 "serial_number": "SPDK00000000000001", 00:28:59.235 "model_number": "SPDK bdev Controller", 00:28:59.235 "max_namespaces": 1, 00:28:59.235 "min_cntlid": 1, 00:28:59.235 "max_cntlid": 65519, 00:28:59.235 "namespaces": [ 00:28:59.235 { 00:28:59.235 "nsid": 1, 00:28:59.235 "bdev_name": "Nvme0n1", 00:28:59.235 "name": "Nvme0n1", 00:28:59.235 "nguid": "BEEA59261379408DA0883990DCE06377", 00:28:59.235 "uuid": "beea5926-1379-408d-a088-3990dce06377" 00:28:59.235 } 00:28:59.235 ] 00:28:59.235 } 00:28:59.235 ] 00:28:59.235 16:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:59.235 16:11:38 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:59.235 16:11:38 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:28:59.235 16:11:38 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:28:59.235 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.235 16:11:38 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:28:59.235 16:11:38 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:59.235 16:11:38 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:28:59.235 16:11:38 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:28:59.235 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.493 16:11:39 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:28:59.493 16:11:39 -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:28:59.493 16:11:39 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:28:59.493 16:11:39 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:59.493 16:11:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:59.493 16:11:39 -- common/autotest_common.sh@10 -- # set +x 00:28:59.493 16:11:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:59.493 16:11:39 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:28:59.493 16:11:39 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:28:59.493 16:11:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:59.493 16:11:39 -- nvmf/common.sh@117 -- # sync 00:28:59.493 16:11:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:59.493 16:11:39 -- nvmf/common.sh@120 -- # set +e 00:28:59.493 16:11:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:59.493 16:11:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:59.493 rmmod nvme_tcp 00:28:59.493 rmmod nvme_fabrics 00:28:59.751 rmmod nvme_keyring 00:28:59.751 16:11:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:59.751 16:11:39 -- nvmf/common.sh@124 -- # set -e 00:28:59.751 16:11:39 -- nvmf/common.sh@125 -- # return 0 00:28:59.751 16:11:39 -- nvmf/common.sh@478 -- # '[' -n 2615336 ']' 00:28:59.751 16:11:39 -- nvmf/common.sh@479 -- # killprocess 2615336 00:28:59.751 16:11:39 -- common/autotest_common.sh@936 -- # '[' -z 2615336 ']' 00:28:59.751 16:11:39 -- common/autotest_common.sh@940 -- # kill -0 2615336 00:28:59.751 16:11:39 -- common/autotest_common.sh@941 -- # uname 00:28:59.751 16:11:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:59.751 
16:11:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2615336 00:28:59.751 16:11:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:59.751 16:11:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:59.751 16:11:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2615336' 00:28:59.751 killing process with pid 2615336 00:28:59.751 16:11:39 -- common/autotest_common.sh@955 -- # kill 2615336 00:28:59.751 [2024-04-26 16:11:39.255320] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:28:59.751 16:11:39 -- common/autotest_common.sh@960 -- # wait 2615336 00:29:02.304 16:11:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:02.304 16:11:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:02.304 16:11:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:02.304 16:11:41 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:02.304 16:11:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:02.304 16:11:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.304 16:11:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:02.304 16:11:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.836 16:11:43 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:04.836 00:29:04.836 real 0m23.624s 00:29:04.836 user 0m34.493s 00:29:04.836 sys 0m5.073s 00:29:04.836 16:11:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:04.836 16:11:43 -- common/autotest_common.sh@10 -- # set +x 00:29:04.836 ************************************ 00:29:04.836 END TEST nvmf_identify_passthru 00:29:04.836 ************************************ 00:29:04.836 16:11:43 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:04.836 16:11:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:04.836 16:11:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:04.836 16:11:43 -- common/autotest_common.sh@10 -- # set +x 00:29:04.836 ************************************ 00:29:04.836 START TEST nvmf_dif 00:29:04.836 ************************************ 00:29:04.836 16:11:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:04.836 * Looking for test storage... 
00:29:04.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:04.836 16:11:44 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:04.836 16:11:44 -- nvmf/common.sh@7 -- # uname -s 00:29:04.836 16:11:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:04.836 16:11:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:04.836 16:11:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:04.836 16:11:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:04.836 16:11:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:04.836 16:11:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:04.836 16:11:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:04.836 16:11:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:04.836 16:11:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:04.836 16:11:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:04.836 16:11:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:04.836 16:11:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:04.836 16:11:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:04.837 16:11:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:04.837 16:11:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:04.837 16:11:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:04.837 16:11:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:04.837 16:11:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.837 16:11:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.837 16:11:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.837 16:11:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.837 16:11:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.837 16:11:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.837 16:11:44 -- paths/export.sh@5 -- # export PATH 00:29:04.837 16:11:44 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.837 16:11:44 -- nvmf/common.sh@47 -- # : 0 00:29:04.837 16:11:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:04.837 16:11:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:04.837 16:11:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:04.837 16:11:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:04.837 16:11:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:04.837 16:11:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:04.837 16:11:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:04.837 16:11:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:04.837 16:11:44 -- target/dif.sh@15 -- # NULL_META=16 00:29:04.837 16:11:44 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:04.837 16:11:44 -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:04.837 16:11:44 -- target/dif.sh@15 -- # NULL_DIF=1 00:29:04.837 16:11:44 -- target/dif.sh@135 -- # nvmftestinit 00:29:04.837 16:11:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:04.837 16:11:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.837 16:11:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:04.837 16:11:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:04.837 16:11:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:04.837 16:11:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.837 16:11:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:04.837 16:11:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.837 16:11:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:29:04.837 16:11:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:29:04.837 16:11:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:04.837 16:11:44 -- common/autotest_common.sh@10 -- # set +x 00:29:10.104 16:11:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:10.104 16:11:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:10.104 16:11:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:10.104 16:11:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:10.104 16:11:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:10.104 16:11:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:10.104 16:11:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:10.104 16:11:49 -- nvmf/common.sh@295 -- # net_devs=() 00:29:10.104 16:11:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:10.104 16:11:49 -- nvmf/common.sh@296 -- # e810=() 00:29:10.104 16:11:49 -- nvmf/common.sh@296 -- # local -ga e810 00:29:10.104 16:11:49 -- nvmf/common.sh@297 -- # x722=() 00:29:10.104 16:11:49 -- nvmf/common.sh@297 -- # local -ga x722 00:29:10.104 16:11:49 -- nvmf/common.sh@298 -- # mlx=() 00:29:10.104 16:11:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:10.104 16:11:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:10.104 16:11:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:10.104 16:11:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:10.104 16:11:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:29:10.104 16:11:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:10.104 16:11:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:10.104 16:11:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:10.104 16:11:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:10.104 16:11:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:10.104 16:11:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:10.104 16:11:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:10.104 16:11:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:10.104 16:11:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:10.104 16:11:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:10.104 16:11:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:10.104 16:11:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:10.104 16:11:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:10.104 16:11:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:10.104 16:11:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:10.104 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:10.104 16:11:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:10.104 16:11:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:10.104 16:11:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.104 16:11:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.104 16:11:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:10.104 16:11:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:10.104 16:11:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:10.104 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:10.104 16:11:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:10.104 16:11:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:10.104 16:11:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.104 16:11:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.104 16:11:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:10.104 16:11:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:10.104 16:11:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:10.104 16:11:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:10.104 16:11:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:10.104 16:11:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.104 16:11:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:10.104 16:11:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.104 16:11:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:10.104 Found net devices under 0000:86:00.0: cvl_0_0 00:29:10.104 16:11:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.104 16:11:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:10.104 16:11:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.104 16:11:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:10.104 16:11:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.104 16:11:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:10.104 Found net devices under 0000:86:00.1: cvl_0_1 00:29:10.104 16:11:49 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:29:10.104 16:11:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:29:10.104 16:11:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:29:10.105 16:11:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:29:10.105 16:11:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:29:10.105 16:11:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:29:10.105 16:11:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:10.105 16:11:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:10.105 16:11:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:10.105 16:11:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:10.105 16:11:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:10.105 16:11:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:10.105 16:11:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:10.105 16:11:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:10.105 16:11:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:10.105 16:11:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:10.105 16:11:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:10.105 16:11:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:10.105 16:11:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:10.105 16:11:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:10.105 16:11:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:10.105 16:11:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:10.105 16:11:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:10.105 16:11:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:10.105 16:11:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:10.105 16:11:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:10.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:10.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:29:10.105 00:29:10.105 --- 10.0.0.2 ping statistics --- 00:29:10.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.105 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:29:10.105 16:11:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:10.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:10.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:29:10.105 00:29:10.105 --- 10.0.0.1 ping statistics --- 00:29:10.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.105 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:29:10.105 16:11:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:10.105 16:11:49 -- nvmf/common.sh@411 -- # return 0 00:29:10.105 16:11:49 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:29:10.105 16:11:49 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:12.632 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:29:12.632 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:12.632 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:29:12.632 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:29:12.632 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:29:12.632 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:29:12.632 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:29:12.632 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:29:12.632 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:29:12.632 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:29:12.632 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:29:12.632 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:29:12.632 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:29:12.632 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:29:12.632 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:29:12.632 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:29:12.632 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:29:12.632 16:11:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:12.632 16:11:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:12.632 16:11:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:12.632 16:11:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:12.632 16:11:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:12.632 16:11:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:12.632 16:11:51 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:12.632 16:11:51 -- target/dif.sh@137 -- # nvmfappstart 00:29:12.632 16:11:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:12.632 16:11:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:12.632 16:11:51 -- common/autotest_common.sh@10 -- # set +x 00:29:12.632 16:11:51 -- nvmf/common.sh@470 -- # nvmfpid=2621180 00:29:12.632 16:11:51 -- nvmf/common.sh@471 -- # waitforlisten 2621180 00:29:12.632 16:11:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:12.632 16:11:51 -- common/autotest_common.sh@817 -- # '[' -z 2621180 ']' 00:29:12.632 16:11:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.632 16:11:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:12.632 16:11:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
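From here the log is the nvmf_dif suite. Each fio_dif_* case exports null bdevs carrying 16 bytes of per-block metadata and DIF protection (type 1 here, type 3 later in fio_dif_rand_params, per the NULL_* variables set in dif.sh earlier in the trace), over a TCP transport created with --dif-insert-or-strip so the target inserts and strips the protection information itself; fio then drives the namespaces through the spdk_bdev ioengine. Condensed, the per-case target setup traced below amounts to the following sketch (the 64 is the null bdev size, assumed to be in MB; the rest is copied from the trace):

    rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
    rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1    # 512-byte blocks + 16-byte MD, DIF type 1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The fio jobs themselves go through the bdev fio plugin (fio_bdev --ioengine=spdk_bdev --spdk_json_conf ...), with the JSON configuration generated by gen_nvmf_target_json attaching an NVMe-oF controller to this listener, as the filename0/filename1 job lines further down show.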
00:29:12.632 16:11:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:12.632 16:11:51 -- common/autotest_common.sh@10 -- # set +x 00:29:12.632 [2024-04-26 16:11:51.983127] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:29:12.632 [2024-04-26 16:11:51.983213] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.632 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.632 [2024-04-26 16:11:52.092012] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.632 [2024-04-26 16:11:52.306169] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:12.632 [2024-04-26 16:11:52.306217] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:12.632 [2024-04-26 16:11:52.306228] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:12.632 [2024-04-26 16:11:52.306237] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:12.632 [2024-04-26 16:11:52.306248] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:12.632 [2024-04-26 16:11:52.306276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.199 16:11:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:13.199 16:11:52 -- common/autotest_common.sh@850 -- # return 0 00:29:13.199 16:11:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:13.199 16:11:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:13.199 16:11:52 -- common/autotest_common.sh@10 -- # set +x 00:29:13.199 16:11:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.199 16:11:52 -- target/dif.sh@139 -- # create_transport 00:29:13.199 16:11:52 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:13.199 16:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.199 16:11:52 -- common/autotest_common.sh@10 -- # set +x 00:29:13.199 [2024-04-26 16:11:52.794263] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.199 16:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.199 16:11:52 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:13.199 16:11:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:13.199 16:11:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:13.199 16:11:52 -- common/autotest_common.sh@10 -- # set +x 00:29:13.457 ************************************ 00:29:13.457 START TEST fio_dif_1_default 00:29:13.457 ************************************ 00:29:13.457 16:11:52 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:29:13.457 16:11:52 -- target/dif.sh@86 -- # create_subsystems 0 00:29:13.457 16:11:52 -- target/dif.sh@28 -- # local sub 00:29:13.457 16:11:52 -- target/dif.sh@30 -- # for sub in "$@" 00:29:13.457 16:11:52 -- target/dif.sh@31 -- # create_subsystem 0 00:29:13.457 16:11:52 -- target/dif.sh@18 -- # local sub_id=0 00:29:13.457 16:11:52 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:13.457 16:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.457 16:11:52 -- common/autotest_common.sh@10 -- # set +x 00:29:13.457 
bdev_null0 00:29:13.457 16:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.457 16:11:52 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:13.457 16:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.457 16:11:52 -- common/autotest_common.sh@10 -- # set +x 00:29:13.457 16:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.457 16:11:52 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:13.457 16:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.457 16:11:52 -- common/autotest_common.sh@10 -- # set +x 00:29:13.457 16:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.457 16:11:52 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:13.457 16:11:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.457 16:11:52 -- common/autotest_common.sh@10 -- # set +x 00:29:13.457 [2024-04-26 16:11:52.942788] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:13.457 16:11:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.457 16:11:52 -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:13.457 16:11:52 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:13.457 16:11:52 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:13.457 16:11:52 -- nvmf/common.sh@521 -- # config=() 00:29:13.457 16:11:52 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:13.457 16:11:52 -- nvmf/common.sh@521 -- # local subsystem config 00:29:13.457 16:11:52 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:13.457 16:11:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:13.457 16:11:52 -- target/dif.sh@82 -- # gen_fio_conf 00:29:13.457 16:11:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:13.457 { 00:29:13.457 "params": { 00:29:13.457 "name": "Nvme$subsystem", 00:29:13.457 "trtype": "$TEST_TRANSPORT", 00:29:13.457 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.457 "adrfam": "ipv4", 00:29:13.457 "trsvcid": "$NVMF_PORT", 00:29:13.457 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.457 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.457 "hdgst": ${hdgst:-false}, 00:29:13.457 "ddgst": ${ddgst:-false} 00:29:13.457 }, 00:29:13.457 "method": "bdev_nvme_attach_controller" 00:29:13.457 } 00:29:13.457 EOF 00:29:13.457 )") 00:29:13.457 16:11:52 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:13.457 16:11:52 -- target/dif.sh@54 -- # local file 00:29:13.457 16:11:52 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:13.457 16:11:52 -- target/dif.sh@56 -- # cat 00:29:13.457 16:11:52 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:13.457 16:11:52 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:13.457 16:11:52 -- common/autotest_common.sh@1327 -- # shift 00:29:13.457 16:11:52 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:13.457 16:11:52 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:13.457 16:11:52 -- nvmf/common.sh@543 -- # cat 00:29:13.457 16:11:52 -- target/dif.sh@72 -- # (( file = 1 
)) 00:29:13.457 16:11:52 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:13.457 16:11:52 -- target/dif.sh@72 -- # (( file <= files )) 00:29:13.457 16:11:52 -- common/autotest_common.sh@1331 -- # grep libasan 00:29:13.457 16:11:52 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:13.457 16:11:52 -- nvmf/common.sh@545 -- # jq . 00:29:13.457 16:11:52 -- nvmf/common.sh@546 -- # IFS=, 00:29:13.457 16:11:52 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:13.457 "params": { 00:29:13.457 "name": "Nvme0", 00:29:13.457 "trtype": "tcp", 00:29:13.457 "traddr": "10.0.0.2", 00:29:13.457 "adrfam": "ipv4", 00:29:13.457 "trsvcid": "4420", 00:29:13.457 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:13.457 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:13.457 "hdgst": false, 00:29:13.457 "ddgst": false 00:29:13.457 }, 00:29:13.457 "method": "bdev_nvme_attach_controller" 00:29:13.457 }' 00:29:13.457 16:11:52 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:13.457 16:11:52 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:13.457 16:11:52 -- common/autotest_common.sh@1333 -- # break 00:29:13.457 16:11:52 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:13.457 16:11:52 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:13.715 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:13.715 fio-3.35 00:29:13.715 Starting 1 thread 00:29:13.715 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.913 00:29:25.913 filename0: (groupid=0, jobs=1): err= 0: pid=2621567: Fri Apr 26 16:12:04 2024 00:29:25.913 read: IOPS=181, BW=727KiB/s (744kB/s)(7280KiB/10018msec) 00:29:25.913 slat (nsec): min=4688, max=23403, avg=8278.10, stdev=2123.87 00:29:25.913 clat (usec): min=994, max=44152, avg=21990.78, stdev=20488.69 00:29:25.913 lat (usec): min=1001, max=44169, avg=21999.06, stdev=20488.57 00:29:25.913 clat percentiles (usec): 00:29:25.913 | 1.00th=[ 1352], 5.00th=[ 1434], 10.00th=[ 1434], 20.00th=[ 1450], 00:29:25.913 | 30.00th=[ 1450], 40.00th=[ 1483], 50.00th=[41681], 60.00th=[42206], 00:29:25.913 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:29:25.913 | 99.00th=[43254], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:29:25.913 | 99.99th=[44303] 00:29:25.913 bw ( KiB/s): min= 704, max= 768, per=99.90%, avg=726.40, stdev=31.32, samples=20 00:29:25.913 iops : min= 176, max= 192, avg=181.60, stdev= 7.83, samples=20 00:29:25.913 lat (usec) : 1000=0.05% 00:29:25.913 lat (msec) : 2=49.45%, 4=0.38%, 50=50.11% 00:29:25.913 cpu : usr=95.22%, sys=4.43%, ctx=14, majf=0, minf=1634 00:29:25.913 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:25.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:25.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:25.913 issued rwts: total=1820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:25.913 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:25.913 00:29:25.913 Run status group 0 (all jobs): 00:29:25.913 READ: bw=727KiB/s (744kB/s), 727KiB/s-727KiB/s (744kB/s-744kB/s), io=7280KiB (7455kB), run=10018-10018msec 00:29:25.913 ----------------------------------------------------- 
00:29:25.913 Suppressions used: 00:29:25.913 count bytes template 00:29:25.913 1 8 /usr/src/fio/parse.c 00:29:25.913 1 8 libtcmalloc_minimal.so 00:29:25.913 1 904 libcrypto.so 00:29:25.913 ----------------------------------------------------- 00:29:25.913 00:29:25.913 16:12:05 -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:25.913 16:12:05 -- target/dif.sh@43 -- # local sub 00:29:25.913 16:12:05 -- target/dif.sh@45 -- # for sub in "$@" 00:29:25.913 16:12:05 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:25.913 16:12:05 -- target/dif.sh@36 -- # local sub_id=0 00:29:25.913 16:12:05 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:25.913 16:12:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:25.913 16:12:05 -- common/autotest_common.sh@10 -- # set +x 00:29:25.913 16:12:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:25.913 16:12:05 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:25.913 16:12:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:25.913 16:12:05 -- common/autotest_common.sh@10 -- # set +x 00:29:25.913 16:12:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:25.913 00:29:25.913 real 0m12.351s 00:29:25.913 user 0m16.991s 00:29:25.913 sys 0m0.868s 00:29:25.913 16:12:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:25.913 16:12:05 -- common/autotest_common.sh@10 -- # set +x 00:29:25.913 ************************************ 00:29:25.913 END TEST fio_dif_1_default 00:29:25.913 ************************************ 00:29:25.913 16:12:05 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:25.913 16:12:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:25.913 16:12:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:25.913 16:12:05 -- common/autotest_common.sh@10 -- # set +x 00:29:25.913 ************************************ 00:29:25.913 START TEST fio_dif_1_multi_subsystems 00:29:25.913 ************************************ 00:29:25.913 16:12:05 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:29:25.913 16:12:05 -- target/dif.sh@92 -- # local files=1 00:29:25.913 16:12:05 -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:25.913 16:12:05 -- target/dif.sh@28 -- # local sub 00:29:25.913 16:12:05 -- target/dif.sh@30 -- # for sub in "$@" 00:29:25.913 16:12:05 -- target/dif.sh@31 -- # create_subsystem 0 00:29:25.914 16:12:05 -- target/dif.sh@18 -- # local sub_id=0 00:29:25.914 16:12:05 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:25.914 16:12:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:25.914 16:12:05 -- common/autotest_common.sh@10 -- # set +x 00:29:25.914 bdev_null0 00:29:25.914 16:12:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:25.914 16:12:05 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:25.914 16:12:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:25.914 16:12:05 -- common/autotest_common.sh@10 -- # set +x 00:29:25.914 16:12:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:25.914 16:12:05 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:25.914 16:12:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:25.914 16:12:05 -- common/autotest_common.sh@10 -- # set +x 00:29:25.914 16:12:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:29:25.914 16:12:05 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:25.914 16:12:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:25.914 16:12:05 -- common/autotest_common.sh@10 -- # set +x 00:29:25.914 [2024-04-26 16:12:05.433912] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.914 16:12:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:25.914 16:12:05 -- target/dif.sh@30 -- # for sub in "$@" 00:29:25.914 16:12:05 -- target/dif.sh@31 -- # create_subsystem 1 00:29:25.914 16:12:05 -- target/dif.sh@18 -- # local sub_id=1 00:29:25.914 16:12:05 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:25.914 16:12:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:25.914 16:12:05 -- common/autotest_common.sh@10 -- # set +x 00:29:25.914 bdev_null1 00:29:25.914 16:12:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:25.914 16:12:05 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:25.914 16:12:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:25.914 16:12:05 -- common/autotest_common.sh@10 -- # set +x 00:29:25.914 16:12:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:25.914 16:12:05 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:25.914 16:12:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:25.914 16:12:05 -- common/autotest_common.sh@10 -- # set +x 00:29:25.914 16:12:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:25.914 16:12:05 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:25.914 16:12:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:25.914 16:12:05 -- common/autotest_common.sh@10 -- # set +x 00:29:25.914 16:12:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:25.914 16:12:05 -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:25.914 16:12:05 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:25.914 16:12:05 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:25.914 16:12:05 -- nvmf/common.sh@521 -- # config=() 00:29:25.914 16:12:05 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:25.914 16:12:05 -- nvmf/common.sh@521 -- # local subsystem config 00:29:25.914 16:12:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:25.914 16:12:05 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:25.914 16:12:05 -- target/dif.sh@82 -- # gen_fio_conf 00:29:25.914 16:12:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:25.914 { 00:29:25.914 "params": { 00:29:25.914 "name": "Nvme$subsystem", 00:29:25.914 "trtype": "$TEST_TRANSPORT", 00:29:25.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.914 "adrfam": "ipv4", 00:29:25.914 "trsvcid": "$NVMF_PORT", 00:29:25.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.914 "hdgst": ${hdgst:-false}, 00:29:25.914 "ddgst": ${ddgst:-false} 00:29:25.914 }, 00:29:25.914 "method": "bdev_nvme_attach_controller" 00:29:25.914 } 00:29:25.914 EOF 00:29:25.914 )") 00:29:25.914 16:12:05 -- common/autotest_common.sh@1323 
-- # local fio_dir=/usr/src/fio 00:29:25.914 16:12:05 -- target/dif.sh@54 -- # local file 00:29:25.914 16:12:05 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:25.914 16:12:05 -- target/dif.sh@56 -- # cat 00:29:25.914 16:12:05 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:25.914 16:12:05 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:25.914 16:12:05 -- common/autotest_common.sh@1327 -- # shift 00:29:25.914 16:12:05 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:25.914 16:12:05 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:25.914 16:12:05 -- nvmf/common.sh@543 -- # cat 00:29:25.914 16:12:05 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:25.914 16:12:05 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:25.914 16:12:05 -- target/dif.sh@72 -- # (( file <= files )) 00:29:25.914 16:12:05 -- common/autotest_common.sh@1331 -- # grep libasan 00:29:25.914 16:12:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:25.914 16:12:05 -- target/dif.sh@73 -- # cat 00:29:25.914 16:12:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:25.914 { 00:29:25.914 "params": { 00:29:25.914 "name": "Nvme$subsystem", 00:29:25.914 "trtype": "$TEST_TRANSPORT", 00:29:25.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.914 "adrfam": "ipv4", 00:29:25.914 "trsvcid": "$NVMF_PORT", 00:29:25.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.914 "hdgst": ${hdgst:-false}, 00:29:25.914 "ddgst": ${ddgst:-false} 00:29:25.914 }, 00:29:25.914 "method": "bdev_nvme_attach_controller" 00:29:25.914 } 00:29:25.914 EOF 00:29:25.914 )") 00:29:25.914 16:12:05 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:25.914 16:12:05 -- nvmf/common.sh@543 -- # cat 00:29:25.914 16:12:05 -- target/dif.sh@72 -- # (( file++ )) 00:29:25.914 16:12:05 -- target/dif.sh@72 -- # (( file <= files )) 00:29:25.914 16:12:05 -- nvmf/common.sh@545 -- # jq . 
00:29:25.914 16:12:05 -- nvmf/common.sh@546 -- # IFS=, 00:29:25.914 16:12:05 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:25.914 "params": { 00:29:25.914 "name": "Nvme0", 00:29:25.914 "trtype": "tcp", 00:29:25.914 "traddr": "10.0.0.2", 00:29:25.914 "adrfam": "ipv4", 00:29:25.914 "trsvcid": "4420", 00:29:25.914 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:25.914 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:25.914 "hdgst": false, 00:29:25.914 "ddgst": false 00:29:25.914 }, 00:29:25.914 "method": "bdev_nvme_attach_controller" 00:29:25.914 },{ 00:29:25.914 "params": { 00:29:25.914 "name": "Nvme1", 00:29:25.914 "trtype": "tcp", 00:29:25.914 "traddr": "10.0.0.2", 00:29:25.914 "adrfam": "ipv4", 00:29:25.914 "trsvcid": "4420", 00:29:25.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:25.914 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:25.914 "hdgst": false, 00:29:25.914 "ddgst": false 00:29:25.914 }, 00:29:25.914 "method": "bdev_nvme_attach_controller" 00:29:25.914 }' 00:29:25.914 16:12:05 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:25.914 16:12:05 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:25.914 16:12:05 -- common/autotest_common.sh@1333 -- # break 00:29:25.914 16:12:05 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:25.914 16:12:05 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:26.173 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:26.173 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:26.173 fio-3.35 00:29:26.173 Starting 2 threads 00:29:26.431 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.634 00:29:38.634 filename0: (groupid=0, jobs=1): err= 0: pid=2623884: Fri Apr 26 16:12:17 2024 00:29:38.634 read: IOPS=95, BW=380KiB/s (389kB/s)(3808KiB/10017msec) 00:29:38.634 slat (nsec): min=6916, max=39263, avg=8982.15, stdev=2815.11 00:29:38.634 clat (usec): min=41162, max=43501, avg=42059.11, stdev=301.45 00:29:38.634 lat (usec): min=41169, max=43527, avg=42068.09, stdev=301.54 00:29:38.634 clat percentiles (usec): 00:29:38.634 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:29:38.634 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:29:38.634 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:29:38.634 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:29:38.634 | 99.99th=[43254] 00:29:38.634 bw ( KiB/s): min= 352, max= 384, per=34.35%, avg=379.20, stdev=11.72, samples=20 00:29:38.634 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:29:38.634 lat (msec) : 50=100.00% 00:29:38.634 cpu : usr=97.38%, sys=2.33%, ctx=14, majf=0, minf=1634 00:29:38.634 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:38.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:38.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:38.634 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:38.634 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:38.634 filename1: (groupid=0, jobs=1): err= 0: pid=2623885: Fri Apr 26 16:12:17 2024 00:29:38.634 read: IOPS=180, BW=724KiB/s (741kB/s)(7264KiB/10035msec) 
00:29:38.634 slat (nsec): min=6933, max=27833, avg=8687.15, stdev=2283.68 00:29:38.634 clat (usec): min=1318, max=43430, avg=22076.10, stdev=20428.48 00:29:38.634 lat (usec): min=1325, max=43441, avg=22084.79, stdev=20428.32 00:29:38.634 clat percentiles (usec): 00:29:38.634 | 1.00th=[ 1500], 5.00th=[ 1532], 10.00th=[ 1532], 20.00th=[ 1549], 00:29:38.634 | 30.00th=[ 1549], 40.00th=[ 1582], 50.00th=[41157], 60.00th=[42206], 00:29:38.634 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:29:38.634 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:29:38.634 | 99.99th=[43254] 00:29:38.634 bw ( KiB/s): min= 672, max= 768, per=65.62%, avg=724.80, stdev=29.87, samples=20 00:29:38.634 iops : min= 168, max= 192, avg=181.20, stdev= 7.47, samples=20 00:29:38.634 lat (msec) : 2=49.78%, 50=50.22% 00:29:38.634 cpu : usr=97.54%, sys=2.16%, ctx=14, majf=0, minf=1634 00:29:38.634 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:38.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:38.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:38.634 issued rwts: total=1816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:38.634 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:38.634 00:29:38.634 Run status group 0 (all jobs): 00:29:38.634 READ: bw=1103KiB/s (1130kB/s), 380KiB/s-724KiB/s (389kB/s-741kB/s), io=10.8MiB (11.3MB), run=10017-10035msec 00:29:38.634 ----------------------------------------------------- 00:29:38.634 Suppressions used: 00:29:38.634 count bytes template 00:29:38.634 2 16 /usr/src/fio/parse.c 00:29:38.634 1 8 libtcmalloc_minimal.so 00:29:38.634 1 904 libcrypto.so 00:29:38.634 ----------------------------------------------------- 00:29:38.634 00:29:38.634 16:12:17 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:38.635 16:12:17 -- target/dif.sh@43 -- # local sub 00:29:38.635 16:12:17 -- target/dif.sh@45 -- # for sub in "$@" 00:29:38.635 16:12:17 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:38.635 16:12:17 -- target/dif.sh@36 -- # local sub_id=0 00:29:38.635 16:12:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:38.635 16:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.635 16:12:17 -- common/autotest_common.sh@10 -- # set +x 00:29:38.635 16:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.635 16:12:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:38.635 16:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.635 16:12:18 -- common/autotest_common.sh@10 -- # set +x 00:29:38.635 16:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.635 16:12:18 -- target/dif.sh@45 -- # for sub in "$@" 00:29:38.635 16:12:18 -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:38.635 16:12:18 -- target/dif.sh@36 -- # local sub_id=1 00:29:38.635 16:12:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:38.635 16:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.635 16:12:18 -- common/autotest_common.sh@10 -- # set +x 00:29:38.635 16:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.635 16:12:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:38.635 16:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.635 16:12:18 -- common/autotest_common.sh@10 -- # set +x 00:29:38.635 16:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:29:38.635 00:29:38.635 real 0m12.630s 00:29:38.635 user 0m27.855s 00:29:38.635 sys 0m1.031s 00:29:38.635 16:12:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:38.635 16:12:18 -- common/autotest_common.sh@10 -- # set +x 00:29:38.635 ************************************ 00:29:38.635 END TEST fio_dif_1_multi_subsystems 00:29:38.635 ************************************ 00:29:38.635 16:12:18 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:38.635 16:12:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:38.635 16:12:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:38.635 16:12:18 -- common/autotest_common.sh@10 -- # set +x 00:29:38.635 ************************************ 00:29:38.635 START TEST fio_dif_rand_params 00:29:38.635 ************************************ 00:29:38.635 16:12:18 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:29:38.635 16:12:18 -- target/dif.sh@100 -- # local NULL_DIF 00:29:38.635 16:12:18 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:38.635 16:12:18 -- target/dif.sh@103 -- # NULL_DIF=3 00:29:38.635 16:12:18 -- target/dif.sh@103 -- # bs=128k 00:29:38.635 16:12:18 -- target/dif.sh@103 -- # numjobs=3 00:29:38.635 16:12:18 -- target/dif.sh@103 -- # iodepth=3 00:29:38.635 16:12:18 -- target/dif.sh@103 -- # runtime=5 00:29:38.635 16:12:18 -- target/dif.sh@105 -- # create_subsystems 0 00:29:38.635 16:12:18 -- target/dif.sh@28 -- # local sub 00:29:38.635 16:12:18 -- target/dif.sh@30 -- # for sub in "$@" 00:29:38.635 16:12:18 -- target/dif.sh@31 -- # create_subsystem 0 00:29:38.635 16:12:18 -- target/dif.sh@18 -- # local sub_id=0 00:29:38.635 16:12:18 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:38.635 16:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.635 16:12:18 -- common/autotest_common.sh@10 -- # set +x 00:29:38.635 bdev_null0 00:29:38.635 16:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.635 16:12:18 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:38.635 16:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.635 16:12:18 -- common/autotest_common.sh@10 -- # set +x 00:29:38.635 16:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.635 16:12:18 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:38.635 16:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.635 16:12:18 -- common/autotest_common.sh@10 -- # set +x 00:29:38.635 16:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.635 16:12:18 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:38.635 16:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.635 16:12:18 -- common/autotest_common.sh@10 -- # set +x 00:29:38.635 [2024-04-26 16:12:18.233408] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.635 16:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.635 16:12:18 -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:38.635 16:12:18 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:38.635 16:12:18 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:38.635 16:12:18 -- nvmf/common.sh@521 -- # config=() 00:29:38.635 16:12:18 -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:38.635 16:12:18 -- nvmf/common.sh@521 -- # local subsystem config 00:29:38.635 16:12:18 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:38.635 16:12:18 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:38.635 16:12:18 -- target/dif.sh@82 -- # gen_fio_conf 00:29:38.635 16:12:18 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:38.635 { 00:29:38.635 "params": { 00:29:38.635 "name": "Nvme$subsystem", 00:29:38.635 "trtype": "$TEST_TRANSPORT", 00:29:38.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.635 "adrfam": "ipv4", 00:29:38.635 "trsvcid": "$NVMF_PORT", 00:29:38.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.635 "hdgst": ${hdgst:-false}, 00:29:38.635 "ddgst": ${ddgst:-false} 00:29:38.635 }, 00:29:38.635 "method": "bdev_nvme_attach_controller" 00:29:38.635 } 00:29:38.635 EOF 00:29:38.635 )") 00:29:38.635 16:12:18 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:38.635 16:12:18 -- target/dif.sh@54 -- # local file 00:29:38.635 16:12:18 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:38.635 16:12:18 -- target/dif.sh@56 -- # cat 00:29:38.635 16:12:18 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:38.635 16:12:18 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:38.635 16:12:18 -- common/autotest_common.sh@1327 -- # shift 00:29:38.635 16:12:18 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:38.635 16:12:18 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:38.635 16:12:18 -- nvmf/common.sh@543 -- # cat 00:29:38.635 16:12:18 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:38.635 16:12:18 -- target/dif.sh@72 -- # (( file <= files )) 00:29:38.635 16:12:18 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:38.635 16:12:18 -- common/autotest_common.sh@1331 -- # grep libasan 00:29:38.635 16:12:18 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:38.635 16:12:18 -- nvmf/common.sh@545 -- # jq . 
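The ldd | grep libasan | awk trace above is the sanitizer handling from autotest_common.sh: SPDK here is built with ASan, so the spdk_bdev fio plugin is instrumented while /usr/src/fio/fio itself is not, and the ASan runtime has to be preloaded ahead of the plugin for the engine to load. A minimal sketch of that step, with conf.json and job.fio standing in for the /dev/fd/62 and /dev/fd/61 process substitutions that dif.sh actually passes:

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
# Resolve the ASan runtime the plugin links against (/usr/lib64/libasan.so.8 on this host).
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf conf.json job.fio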
00:29:38.635 16:12:18 -- nvmf/common.sh@546 -- # IFS=, 00:29:38.635 16:12:18 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:38.635 "params": { 00:29:38.635 "name": "Nvme0", 00:29:38.635 "trtype": "tcp", 00:29:38.635 "traddr": "10.0.0.2", 00:29:38.635 "adrfam": "ipv4", 00:29:38.635 "trsvcid": "4420", 00:29:38.635 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:38.635 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:38.635 "hdgst": false, 00:29:38.635 "ddgst": false 00:29:38.635 }, 00:29:38.635 "method": "bdev_nvme_attach_controller" 00:29:38.635 }' 00:29:38.635 16:12:18 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:38.635 16:12:18 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:38.635 16:12:18 -- common/autotest_common.sh@1333 -- # break 00:29:38.635 16:12:18 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:38.635 16:12:18 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:39.282 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:39.282 ... 00:29:39.282 fio-3.35 00:29:39.282 Starting 3 threads 00:29:39.282 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.856 00:29:45.856 filename0: (groupid=0, jobs=1): err= 0: pid=2626466: Fri Apr 26 16:12:24 2024 00:29:45.856 read: IOPS=188, BW=23.5MiB/s (24.7MB/s)(118MiB/5004msec) 00:29:45.856 slat (nsec): min=7208, max=90939, avg=14857.87, stdev=7438.29 00:29:45.856 clat (usec): min=5436, max=97377, avg=15910.31, stdev=15923.39 00:29:45.856 lat (usec): min=5458, max=97402, avg=15925.17, stdev=15923.77 00:29:45.856 clat percentiles (usec): 00:29:45.856 | 1.00th=[ 5800], 5.00th=[ 6652], 10.00th=[ 6980], 20.00th=[ 7635], 00:29:45.856 | 30.00th=[ 8356], 40.00th=[ 8979], 50.00th=[ 9896], 60.00th=[10945], 00:29:45.856 | 70.00th=[12125], 80.00th=[13304], 90.00th=[52167], 95.00th=[53740], 00:29:45.856 | 99.00th=[57410], 99.50th=[58459], 99.90th=[96994], 99.95th=[96994], 00:29:45.856 | 99.99th=[96994] 00:29:45.856 bw ( KiB/s): min=16896, max=33024, per=32.85%, avg=24038.40, stdev=4809.73, samples=10 00:29:45.856 iops : min= 132, max= 258, avg=187.80, stdev=37.58, samples=10 00:29:45.856 lat (msec) : 10=52.02%, 20=33.76%, 50=1.27%, 100=12.95% 00:29:45.856 cpu : usr=95.74%, sys=3.40%, ctx=221, majf=0, minf=1637 00:29:45.856 IO depths : 1=4.5%, 2=95.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:45.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:45.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:45.856 issued rwts: total=942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:45.856 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:45.856 filename0: (groupid=0, jobs=1): err= 0: pid=2626467: Fri Apr 26 16:12:24 2024 00:29:45.856 read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(142MiB/5015msec) 00:29:45.856 slat (nsec): min=7211, max=41987, avg=13112.10, stdev=6190.77 00:29:45.856 clat (usec): min=5244, max=59083, avg=13220.78, stdev=13401.47 00:29:45.856 lat (usec): min=5253, max=59095, avg=13233.90, stdev=13401.78 00:29:45.856 clat percentiles (usec): 00:29:45.856 | 1.00th=[ 5407], 5.00th=[ 5866], 10.00th=[ 6259], 20.00th=[ 6849], 00:29:45.856 | 30.00th=[ 7439], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 9372], 00:29:45.856 | 70.00th=[10552], 80.00th=[11731], 90.00th=[47973], 
95.00th=[51119], 00:29:45.856 | 99.00th=[56886], 99.50th=[57410], 99.90th=[58983], 99.95th=[58983], 00:29:45.856 | 99.99th=[58983] 00:29:45.856 bw ( KiB/s): min=17664, max=34816, per=39.64%, avg=29004.80, stdev=5253.44, samples=10 00:29:45.856 iops : min= 138, max= 272, avg=226.60, stdev=41.04, samples=10 00:29:45.856 lat (msec) : 10=65.67%, 20=23.77%, 50=2.90%, 100=7.66% 00:29:45.856 cpu : usr=95.99%, sys=3.55%, ctx=8, majf=0, minf=1634 00:29:45.856 IO depths : 1=2.4%, 2=97.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:45.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:45.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:45.856 issued rwts: total=1136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:45.856 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:45.856 filename0: (groupid=0, jobs=1): err= 0: pid=2626468: Fri Apr 26 16:12:24 2024 00:29:45.856 read: IOPS=158, BW=19.8MiB/s (20.8MB/s)(99.5MiB/5028msec) 00:29:45.856 slat (nsec): min=7308, max=44475, avg=13325.54, stdev=6412.86 00:29:45.856 clat (usec): min=4738, max=59702, avg=18925.14, stdev=18184.20 00:29:45.856 lat (usec): min=4749, max=59720, avg=18938.46, stdev=18184.09 00:29:45.856 clat percentiles (usec): 00:29:45.856 | 1.00th=[ 5276], 5.00th=[ 6325], 10.00th=[ 6652], 20.00th=[ 7963], 00:29:45.856 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[10552], 60.00th=[11863], 00:29:45.856 | 70.00th=[13435], 80.00th=[22676], 90.00th=[55837], 95.00th=[56886], 00:29:45.856 | 99.00th=[58983], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:29:45.856 | 99.99th=[59507] 00:29:45.856 bw ( KiB/s): min= 9216, max=37888, per=27.75%, avg=20300.80, stdev=8481.16, samples=10 00:29:45.856 iops : min= 72, max= 296, avg=158.60, stdev=66.26, samples=10 00:29:45.856 lat (msec) : 10=44.85%, 20=34.80%, 50=2.01%, 100=18.34% 00:29:45.856 cpu : usr=96.64%, sys=2.96%, ctx=7, majf=0, minf=1636 00:29:45.856 IO depths : 1=4.4%, 2=95.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:45.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:45.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:45.856 issued rwts: total=796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:45.856 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:45.856 00:29:45.856 Run status group 0 (all jobs): 00:29:45.856 READ: bw=71.4MiB/s (74.9MB/s), 19.8MiB/s-28.3MiB/s (20.8MB/s-29.7MB/s), io=359MiB (377MB), run=5004-5028msec 00:29:46.117 ----------------------------------------------------- 00:29:46.117 Suppressions used: 00:29:46.117 count bytes template 00:29:46.117 5 44 /usr/src/fio/parse.c 00:29:46.117 1 8 libtcmalloc_minimal.so 00:29:46.117 1 904 libcrypto.so 00:29:46.117 ----------------------------------------------------- 00:29:46.117 00:29:46.117 16:12:25 -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:46.117 16:12:25 -- target/dif.sh@43 -- # local sub 00:29:46.117 16:12:25 -- target/dif.sh@45 -- # for sub in "$@" 00:29:46.117 16:12:25 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:46.117 16:12:25 -- target/dif.sh@36 -- # local sub_id=0 00:29:46.117 16:12:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:46.117 16:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.117 16:12:25 -- common/autotest_common.sh@10 -- # set +x 00:29:46.117 16:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.117 16:12:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete 
bdev_null0 00:29:46.117 16:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.117 16:12:25 -- common/autotest_common.sh@10 -- # set +x 00:29:46.117 16:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.117 16:12:25 -- target/dif.sh@109 -- # NULL_DIF=2 00:29:46.117 16:12:25 -- target/dif.sh@109 -- # bs=4k 00:29:46.117 16:12:25 -- target/dif.sh@109 -- # numjobs=8 00:29:46.117 16:12:25 -- target/dif.sh@109 -- # iodepth=16 00:29:46.117 16:12:25 -- target/dif.sh@109 -- # runtime= 00:29:46.117 16:12:25 -- target/dif.sh@109 -- # files=2 00:29:46.117 16:12:25 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:46.117 16:12:25 -- target/dif.sh@28 -- # local sub 00:29:46.117 16:12:25 -- target/dif.sh@30 -- # for sub in "$@" 00:29:46.117 16:12:25 -- target/dif.sh@31 -- # create_subsystem 0 00:29:46.117 16:12:25 -- target/dif.sh@18 -- # local sub_id=0 00:29:46.117 16:12:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:46.117 16:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.117 16:12:25 -- common/autotest_common.sh@10 -- # set +x 00:29:46.117 bdev_null0 00:29:46.117 16:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.117 16:12:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:46.117 16:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.117 16:12:25 -- common/autotest_common.sh@10 -- # set +x 00:29:46.117 16:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.117 16:12:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:46.117 16:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.117 16:12:25 -- common/autotest_common.sh@10 -- # set +x 00:29:46.117 16:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.117 16:12:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:46.117 16:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.117 16:12:25 -- common/autotest_common.sh@10 -- # set +x 00:29:46.117 [2024-04-26 16:12:25.620214] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:46.117 16:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.117 16:12:25 -- target/dif.sh@30 -- # for sub in "$@" 00:29:46.117 16:12:25 -- target/dif.sh@31 -- # create_subsystem 1 00:29:46.117 16:12:25 -- target/dif.sh@18 -- # local sub_id=1 00:29:46.117 16:12:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:46.117 16:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.117 16:12:25 -- common/autotest_common.sh@10 -- # set +x 00:29:46.117 bdev_null1 00:29:46.117 16:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.117 16:12:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:46.117 16:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.117 16:12:25 -- common/autotest_common.sh@10 -- # set +x 00:29:46.117 16:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.117 16:12:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:46.117 16:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.117 16:12:25 -- 
common/autotest_common.sh@10 -- # set +x 00:29:46.117 16:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.117 16:12:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:46.117 16:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.117 16:12:25 -- common/autotest_common.sh@10 -- # set +x 00:29:46.117 16:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.117 16:12:25 -- target/dif.sh@30 -- # for sub in "$@" 00:29:46.117 16:12:25 -- target/dif.sh@31 -- # create_subsystem 2 00:29:46.117 16:12:25 -- target/dif.sh@18 -- # local sub_id=2 00:29:46.117 16:12:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:29:46.117 16:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.117 16:12:25 -- common/autotest_common.sh@10 -- # set +x 00:29:46.117 bdev_null2 00:29:46.117 16:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.117 16:12:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:29:46.117 16:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.117 16:12:25 -- common/autotest_common.sh@10 -- # set +x 00:29:46.117 16:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.117 16:12:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:29:46.117 16:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.117 16:12:25 -- common/autotest_common.sh@10 -- # set +x 00:29:46.117 16:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.117 16:12:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:46.117 16:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:46.117 16:12:25 -- common/autotest_common.sh@10 -- # set +x 00:29:46.117 16:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:46.117 16:12:25 -- target/dif.sh@112 -- # fio /dev/fd/62 00:29:46.117 16:12:25 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:29:46.117 16:12:25 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:29:46.117 16:12:25 -- nvmf/common.sh@521 -- # config=() 00:29:46.117 16:12:25 -- nvmf/common.sh@521 -- # local subsystem config 00:29:46.117 16:12:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:46.117 16:12:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:46.117 { 00:29:46.117 "params": { 00:29:46.117 "name": "Nvme$subsystem", 00:29:46.117 "trtype": "$TEST_TRANSPORT", 00:29:46.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:46.117 "adrfam": "ipv4", 00:29:46.117 "trsvcid": "$NVMF_PORT", 00:29:46.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:46.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:46.117 "hdgst": ${hdgst:-false}, 00:29:46.117 "ddgst": ${ddgst:-false} 00:29:46.117 }, 00:29:46.117 "method": "bdev_nvme_attach_controller" 00:29:46.117 } 00:29:46.117 EOF 00:29:46.117 )") 00:29:46.117 16:12:25 -- target/dif.sh@82 -- # gen_fio_conf 00:29:46.117 16:12:25 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:46.117 16:12:25 -- target/dif.sh@54 -- # local file 00:29:46.117 16:12:25 -- target/dif.sh@56 -- # cat 00:29:46.117 16:12:25 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:46.117 16:12:25 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:46.117 16:12:25 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:46.117 16:12:25 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:46.117 16:12:25 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:46.117 16:12:25 -- nvmf/common.sh@543 -- # cat 00:29:46.117 16:12:25 -- common/autotest_common.sh@1327 -- # shift 00:29:46.117 16:12:25 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:46.117 16:12:25 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:46.117 16:12:25 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:46.117 16:12:25 -- target/dif.sh@72 -- # (( file <= files )) 00:29:46.117 16:12:25 -- target/dif.sh@73 -- # cat 00:29:46.117 16:12:25 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:46.117 16:12:25 -- common/autotest_common.sh@1331 -- # grep libasan 00:29:46.117 16:12:25 -- target/dif.sh@72 -- # (( file++ )) 00:29:46.117 16:12:25 -- target/dif.sh@72 -- # (( file <= files )) 00:29:46.117 16:12:25 -- target/dif.sh@73 -- # cat 00:29:46.117 16:12:25 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:46.117 16:12:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:46.117 16:12:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:46.117 { 00:29:46.117 "params": { 00:29:46.117 "name": "Nvme$subsystem", 00:29:46.117 "trtype": "$TEST_TRANSPORT", 00:29:46.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:46.117 "adrfam": "ipv4", 00:29:46.117 "trsvcid": "$NVMF_PORT", 00:29:46.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:46.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:46.117 "hdgst": ${hdgst:-false}, 00:29:46.117 "ddgst": ${ddgst:-false} 00:29:46.117 }, 00:29:46.117 "method": "bdev_nvme_attach_controller" 00:29:46.117 } 00:29:46.117 EOF 00:29:46.117 )") 00:29:46.117 16:12:25 -- nvmf/common.sh@543 -- # cat 00:29:46.117 16:12:25 -- target/dif.sh@72 -- # (( file++ )) 00:29:46.117 16:12:25 -- target/dif.sh@72 -- # (( file <= files )) 00:29:46.117 16:12:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:46.117 16:12:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:46.117 { 00:29:46.117 "params": { 00:29:46.117 "name": "Nvme$subsystem", 00:29:46.117 "trtype": "$TEST_TRANSPORT", 00:29:46.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:46.117 "adrfam": "ipv4", 00:29:46.117 "trsvcid": "$NVMF_PORT", 00:29:46.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:46.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:46.117 "hdgst": ${hdgst:-false}, 00:29:46.117 "ddgst": ${ddgst:-false} 00:29:46.117 }, 00:29:46.117 "method": "bdev_nvme_attach_controller" 00:29:46.117 } 00:29:46.117 EOF 00:29:46.117 )") 00:29:46.117 16:12:25 -- nvmf/common.sh@543 -- # cat 00:29:46.117 16:12:25 -- nvmf/common.sh@545 -- # jq . 
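The rpc_cmd calls above build the target side for the 24-thread run that follows: three 64 MB null bdevs with 16-byte metadata and DIF type 2, each exported through its own NVMe/TCP subsystem listening on 10.0.0.2:4420. Assuming rpc_cmd wraps scripts/rpc.py against the running target, which is how the autotest helpers drive it, the equivalent direct invocations for subsystem 0 look like this; ids 1 and 2 differ only in the names and serial numbers, and the tcp transport already exists from the earlier target setup.

scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420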
00:29:46.117 16:12:25 -- nvmf/common.sh@546 -- # IFS=, 00:29:46.117 16:12:25 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:46.117 "params": { 00:29:46.117 "name": "Nvme0", 00:29:46.117 "trtype": "tcp", 00:29:46.117 "traddr": "10.0.0.2", 00:29:46.117 "adrfam": "ipv4", 00:29:46.117 "trsvcid": "4420", 00:29:46.117 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:46.117 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:46.117 "hdgst": false, 00:29:46.117 "ddgst": false 00:29:46.117 }, 00:29:46.117 "method": "bdev_nvme_attach_controller" 00:29:46.117 },{ 00:29:46.117 "params": { 00:29:46.117 "name": "Nvme1", 00:29:46.117 "trtype": "tcp", 00:29:46.117 "traddr": "10.0.0.2", 00:29:46.117 "adrfam": "ipv4", 00:29:46.117 "trsvcid": "4420", 00:29:46.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:46.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:46.117 "hdgst": false, 00:29:46.117 "ddgst": false 00:29:46.117 }, 00:29:46.117 "method": "bdev_nvme_attach_controller" 00:29:46.117 },{ 00:29:46.117 "params": { 00:29:46.117 "name": "Nvme2", 00:29:46.117 "trtype": "tcp", 00:29:46.118 "traddr": "10.0.0.2", 00:29:46.118 "adrfam": "ipv4", 00:29:46.118 "trsvcid": "4420", 00:29:46.118 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:46.118 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:46.118 "hdgst": false, 00:29:46.118 "ddgst": false 00:29:46.118 }, 00:29:46.118 "method": "bdev_nvme_attach_controller" 00:29:46.118 }' 00:29:46.118 16:12:25 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:46.118 16:12:25 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:46.118 16:12:25 -- common/autotest_common.sh@1333 -- # break 00:29:46.118 16:12:25 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:46.118 16:12:25 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:46.375 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:46.375 ... 00:29:46.375 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:46.375 ... 00:29:46.375 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:46.375 ... 
00:29:46.375 fio-3.35 00:29:46.375 Starting 24 threads 00:29:46.635 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.851 00:29:58.851 filename0: (groupid=0, jobs=1): err= 0: pid=2627800: Fri Apr 26 16:12:37 2024 00:29:58.851 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.2MiB/10036msec) 00:29:58.851 slat (usec): min=7, max=223, avg=31.72, stdev=19.01 00:29:58.851 clat (usec): min=14468, max=57683, avg=32421.34, stdev=2712.93 00:29:58.851 lat (usec): min=14520, max=57717, avg=32453.06, stdev=2713.50 00:29:58.851 clat percentiles (usec): 00:29:58.851 | 1.00th=[19530], 5.00th=[30278], 10.00th=[31327], 20.00th=[31851], 00:29:58.851 | 30.00th=[32113], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:29:58.851 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:29:58.851 | 99.00th=[38536], 99.50th=[40633], 99.90th=[57410], 99.95th=[57410], 00:29:58.851 | 99.99th=[57934] 00:29:58.851 bw ( KiB/s): min= 1872, max= 2048, per=4.30%, avg=1957.20, stdev=61.33, samples=20 00:29:58.851 iops : min= 468, max= 512, avg=489.30, stdev=15.33, samples=20 00:29:58.851 lat (msec) : 20=1.18%, 50=98.70%, 100=0.12% 00:29:58.851 cpu : usr=94.87%, sys=2.26%, ctx=101, majf=0, minf=1634 00:29:58.851 IO depths : 1=5.6%, 2=11.7%, 4=24.6%, 8=51.3%, 16=6.9%, 32=0.0%, >=64=0.0% 00:29:58.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.851 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.851 issued rwts: total=4910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.851 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.851 filename0: (groupid=0, jobs=1): err= 0: pid=2627801: Fri Apr 26 16:12:37 2024 00:29:58.851 read: IOPS=461, BW=1845KiB/s (1889kB/s)(18.0MiB/10003msec) 00:29:58.851 slat (usec): min=6, max=106, avg=31.74, stdev=19.23 00:29:58.852 clat (usec): min=7884, max=63945, avg=34493.59, stdev=5987.97 00:29:58.852 lat (usec): min=7894, max=63953, avg=34525.32, stdev=5987.23 00:29:58.852 clat percentiles (usec): 00:29:58.852 | 1.00th=[18744], 5.00th=[27657], 10.00th=[31327], 20.00th=[32113], 00:29:58.852 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:29:58.852 | 70.00th=[33817], 80.00th=[35914], 90.00th=[42206], 95.00th=[46924], 00:29:58.852 | 99.00th=[55837], 99.50th=[60031], 99.90th=[63701], 99.95th=[63701], 00:29:58.852 | 99.99th=[63701] 00:29:58.852 bw ( KiB/s): min= 1584, max= 2048, per=4.05%, avg=1841.26, stdev=132.70, samples=19 00:29:58.852 iops : min= 396, max= 512, avg=460.32, stdev=33.17, samples=19 00:29:58.852 lat (msec) : 10=0.26%, 20=1.19%, 50=95.62%, 100=2.93% 00:29:58.852 cpu : usr=98.96%, sys=0.63%, ctx=84, majf=0, minf=1634 00:29:58.852 IO depths : 1=2.6%, 2=5.3%, 4=14.5%, 8=66.7%, 16=11.0%, 32=0.0%, >=64=0.0% 00:29:58.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.852 complete : 0=0.0%, 4=91.7%, 8=3.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.852 issued rwts: total=4613,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.852 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.852 filename0: (groupid=0, jobs=1): err= 0: pid=2627802: Fri Apr 26 16:12:37 2024 00:29:58.852 read: IOPS=474, BW=1897KiB/s (1943kB/s)(18.5MiB/10008msec) 00:29:58.852 slat (usec): min=3, max=135, avg=34.00, stdev=22.40 00:29:58.852 clat (usec): min=15298, max=75536, avg=33497.70, stdev=5322.99 00:29:58.852 lat (usec): min=15306, max=75551, avg=33531.70, stdev=5321.42 00:29:58.852 clat percentiles (usec): 00:29:58.852 | 1.00th=[19006], 
5.00th=[27132], 10.00th=[31065], 20.00th=[31851], 00:29:58.852 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:29:58.852 | 70.00th=[33424], 80.00th=[33817], 90.00th=[36963], 95.00th=[42730], 00:29:58.852 | 99.00th=[54789], 99.50th=[57410], 99.90th=[76022], 99.95th=[76022], 00:29:58.852 | 99.99th=[76022] 00:29:58.852 bw ( KiB/s): min= 1418, max= 2096, per=4.16%, avg=1891.05, stdev=143.17, samples=19 00:29:58.852 iops : min= 354, max= 524, avg=472.74, stdev=35.88, samples=19 00:29:58.852 lat (msec) : 20=1.26%, 50=97.03%, 100=1.71% 00:29:58.852 cpu : usr=98.76%, sys=0.82%, ctx=14, majf=0, minf=1633 00:29:58.852 IO depths : 1=1.1%, 2=4.1%, 4=15.3%, 8=66.5%, 16=13.0%, 32=0.0%, >=64=0.0% 00:29:58.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.852 complete : 0=0.0%, 4=92.2%, 8=3.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.852 issued rwts: total=4747,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.852 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.852 filename0: (groupid=0, jobs=1): err= 0: pid=2627804: Fri Apr 26 16:12:37 2024 00:29:58.852 read: IOPS=482, BW=1929KiB/s (1975kB/s)(18.9MiB/10019msec) 00:29:58.852 slat (usec): min=3, max=110, avg=28.17, stdev=18.36 00:29:58.852 clat (usec): min=22814, max=61122, avg=32959.43, stdev=2235.39 00:29:58.852 lat (usec): min=22822, max=61138, avg=32987.60, stdev=2234.21 00:29:58.852 clat percentiles (usec): 00:29:58.852 | 1.00th=[29492], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:29:58.852 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:29:58.852 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:29:58.852 | 99.00th=[40633], 99.50th=[43254], 99.90th=[61080], 99.95th=[61080], 00:29:58.852 | 99.99th=[61080] 00:29:58.852 bw ( KiB/s): min= 1740, max= 2048, per=4.24%, avg=1926.53, stdev=60.86, samples=19 00:29:58.852 iops : min= 435, max= 512, avg=481.63, stdev=15.21, samples=19 00:29:58.852 lat (msec) : 50=99.67%, 100=0.33% 00:29:58.852 cpu : usr=98.69%, sys=0.91%, ctx=14, majf=0, minf=1634 00:29:58.852 IO depths : 1=4.2%, 2=8.8%, 4=20.3%, 8=58.3%, 16=8.5%, 32=0.0%, >=64=0.0% 00:29:58.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.852 complete : 0=0.0%, 4=92.9%, 8=1.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.852 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.852 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.852 filename0: (groupid=0, jobs=1): err= 0: pid=2627805: Fri Apr 26 16:12:37 2024 00:29:58.852 read: IOPS=489, BW=1959KiB/s (2006kB/s)(19.1MiB/10003msec) 00:29:58.852 slat (usec): min=5, max=706, avg=37.42, stdev=22.52 00:29:58.852 clat (usec): min=10482, max=56731, avg=32412.59, stdev=4214.76 00:29:58.852 lat (usec): min=10521, max=56754, avg=32450.02, stdev=4216.57 00:29:58.852 clat percentiles (usec): 00:29:58.852 | 1.00th=[17957], 5.00th=[25035], 10.00th=[29492], 20.00th=[31589], 00:29:58.852 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:29:58.852 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[36963], 00:29:58.852 | 99.00th=[48497], 99.50th=[53740], 99.90th=[56886], 99.95th=[56886], 00:29:58.852 | 99.99th=[56886] 00:29:58.852 bw ( KiB/s): min= 1792, max= 2096, per=4.31%, avg=1962.11, stdev=81.62, samples=19 00:29:58.852 iops : min= 448, max= 524, avg=490.53, stdev=20.41, samples=19 00:29:58.852 lat (msec) : 20=1.82%, 50=97.22%, 100=0.96% 00:29:58.852 cpu : usr=94.99%, sys=2.28%, 
ctx=123, majf=0, minf=1635 00:29:58.852 IO depths : 1=2.1%, 2=4.4%, 4=12.5%, 8=68.6%, 16=12.3%, 32=0.0%, >=64=0.0% 00:29:58.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.852 complete : 0=0.0%, 4=91.6%, 8=4.5%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.852 issued rwts: total=4900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.852 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.852 filename0: (groupid=0, jobs=1): err= 0: pid=2627806: Fri Apr 26 16:12:37 2024 00:29:58.852 read: IOPS=486, BW=1945KiB/s (1991kB/s)(19.0MiB/10026msec) 00:29:58.852 slat (usec): min=4, max=136, avg=29.03, stdev=20.61 00:29:58.852 clat (usec): min=13517, max=68862, avg=32717.02, stdev=4363.22 00:29:58.852 lat (usec): min=13533, max=68878, avg=32746.05, stdev=4364.81 00:29:58.852 clat percentiles (usec): 00:29:58.852 | 1.00th=[18220], 5.00th=[25035], 10.00th=[30540], 20.00th=[31851], 00:29:58.852 | 30.00th=[32113], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:29:58.852 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[38011], 00:29:58.852 | 99.00th=[47449], 99.50th=[52167], 99.90th=[68682], 99.95th=[68682], 00:29:58.852 | 99.99th=[68682] 00:29:58.852 bw ( KiB/s): min= 1792, max= 2144, per=4.27%, avg=1942.05, stdev=77.44, samples=20 00:29:58.852 iops : min= 448, max= 536, avg=485.50, stdev=19.37, samples=20 00:29:58.852 lat (msec) : 20=1.95%, 50=97.46%, 100=0.59% 00:29:58.852 cpu : usr=98.64%, sys=0.93%, ctx=17, majf=0, minf=1633 00:29:58.852 IO depths : 1=1.8%, 2=4.0%, 4=12.4%, 8=69.8%, 16=11.9%, 32=0.0%, >=64=0.0% 00:29:58.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.852 complete : 0=0.0%, 4=91.2%, 8=4.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.852 issued rwts: total=4874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.852 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.852 filename0: (groupid=0, jobs=1): err= 0: pid=2627807: Fri Apr 26 16:12:37 2024 00:29:58.852 read: IOPS=467, BW=1870KiB/s (1914kB/s)(18.3MiB/10002msec) 00:29:58.852 slat (usec): min=4, max=107, avg=29.43, stdev=20.05 00:29:58.852 clat (usec): min=14736, max=61710, avg=34046.60, stdev=5492.66 00:29:58.852 lat (usec): min=14759, max=61726, avg=34076.03, stdev=5492.16 00:29:58.852 clat percentiles (usec): 00:29:58.852 | 1.00th=[20055], 5.00th=[26870], 10.00th=[30802], 20.00th=[31851], 00:29:58.852 | 30.00th=[32375], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:29:58.852 | 70.00th=[33817], 80.00th=[34866], 90.00th=[41681], 95.00th=[45876], 00:29:58.852 | 99.00th=[53216], 99.50th=[53740], 99.90th=[61080], 99.95th=[61604], 00:29:58.852 | 99.99th=[61604] 00:29:58.852 bw ( KiB/s): min= 1664, max= 1928, per=4.11%, avg=1867.37, stdev=69.40, samples=19 00:29:58.852 iops : min= 416, max= 482, avg=466.84, stdev=17.35, samples=19 00:29:58.852 lat (msec) : 20=1.01%, 50=97.09%, 100=1.90% 00:29:58.852 cpu : usr=98.61%, sys=0.97%, ctx=14, majf=0, minf=1637 00:29:58.852 IO depths : 1=2.2%, 2=4.5%, 4=14.1%, 8=67.6%, 16=11.5%, 32=0.0%, >=64=0.0% 00:29:58.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.852 complete : 0=0.0%, 4=91.7%, 8=3.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.852 issued rwts: total=4675,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.852 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.852 filename0: (groupid=0, jobs=1): err= 0: pid=2627809: Fri Apr 26 16:12:37 2024 00:29:58.852 read: IOPS=476, BW=1908KiB/s (1953kB/s)(18.7MiB/10019msec) 
00:29:58.852 slat (usec): min=5, max=102, avg=30.66, stdev=20.59 00:29:58.852 clat (usec): min=14813, max=63358, avg=33353.84, stdev=5242.13 00:29:58.852 lat (usec): min=14836, max=63374, avg=33384.50, stdev=5242.39 00:29:58.852 clat percentiles (usec): 00:29:58.852 | 1.00th=[19530], 5.00th=[25297], 10.00th=[30278], 20.00th=[31851], 00:29:58.852 | 30.00th=[32113], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:29:58.852 | 70.00th=[33424], 80.00th=[33817], 90.00th=[37487], 95.00th=[44303], 00:29:58.852 | 99.00th=[54264], 99.50th=[56886], 99.90th=[63177], 99.95th=[63177], 00:29:58.852 | 99.99th=[63177] 00:29:58.852 bw ( KiB/s): min= 1792, max= 2016, per=4.19%, avg=1904.80, stdev=69.50, samples=20 00:29:58.852 iops : min= 448, max= 504, avg=476.20, stdev=17.37, samples=20 00:29:58.852 lat (msec) : 20=1.42%, 50=96.88%, 100=1.70% 00:29:58.852 cpu : usr=98.51%, sys=1.05%, ctx=19, majf=0, minf=1633 00:29:58.852 IO depths : 1=1.5%, 2=3.1%, 4=9.8%, 8=72.1%, 16=13.5%, 32=0.0%, >=64=0.0% 00:29:58.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.852 complete : 0=0.0%, 4=90.8%, 8=5.9%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.852 issued rwts: total=4778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.852 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.852 filename1: (groupid=0, jobs=1): err= 0: pid=2627810: Fri Apr 26 16:12:37 2024 00:29:58.852 read: IOPS=481, BW=1927KiB/s (1974kB/s)(18.8MiB/10003msec) 00:29:58.852 slat (usec): min=6, max=192, avg=44.69, stdev=22.20 00:29:58.852 clat (usec): min=19891, max=92279, avg=32824.20, stdev=3471.84 00:29:58.853 lat (usec): min=19919, max=92301, avg=32868.89, stdev=3470.09 00:29:58.853 clat percentiles (usec): 00:29:58.853 | 1.00th=[23462], 5.00th=[30802], 10.00th=[31327], 20.00th=[31851], 00:29:58.853 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:29:58.853 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34866], 00:29:58.853 | 99.00th=[43779], 99.50th=[46400], 99.90th=[72877], 99.95th=[91751], 00:29:58.853 | 99.99th=[92799] 00:29:58.853 bw ( KiB/s): min= 1795, max= 2048, per=4.22%, avg=1921.84, stdev=56.79, samples=19 00:29:58.853 iops : min= 448, max= 512, avg=480.42, stdev=14.29, samples=19 00:29:58.853 lat (msec) : 20=0.12%, 50=99.50%, 100=0.37% 00:29:58.853 cpu : usr=98.81%, sys=0.77%, ctx=17, majf=0, minf=1634 00:29:58.853 IO depths : 1=3.5%, 2=8.9%, 4=22.9%, 8=55.4%, 16=9.2%, 32=0.0%, >=64=0.0% 00:29:58.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.853 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.853 issued rwts: total=4820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.853 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.853 filename1: (groupid=0, jobs=1): err= 0: pid=2627811: Fri Apr 26 16:12:37 2024 00:29:58.853 read: IOPS=460, BW=1843KiB/s (1887kB/s)(18.0MiB/10015msec) 00:29:58.853 slat (usec): min=5, max=113, avg=35.18, stdev=24.09 00:29:58.853 clat (usec): min=12918, max=66925, avg=34491.00, stdev=5687.79 00:29:58.853 lat (usec): min=12935, max=66945, avg=34526.19, stdev=5682.99 00:29:58.853 clat percentiles (usec): 00:29:58.853 | 1.00th=[20579], 5.00th=[30016], 10.00th=[31589], 20.00th=[32113], 00:29:58.853 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:29:58.853 | 70.00th=[33817], 80.00th=[34866], 90.00th=[42206], 95.00th=[47449], 00:29:58.853 | 99.00th=[54789], 99.50th=[59507], 99.90th=[66847], 99.95th=[66847], 00:29:58.853 | 
99.99th=[66847] 00:29:58.853 bw ( KiB/s): min= 1536, max= 2000, per=4.03%, avg=1834.95, stdev=114.98, samples=19 00:29:58.853 iops : min= 384, max= 500, avg=458.74, stdev=28.75, samples=19 00:29:58.853 lat (msec) : 20=0.91%, 50=96.27%, 100=2.82% 00:29:58.853 cpu : usr=98.53%, sys=1.02%, ctx=19, majf=0, minf=1635 00:29:58.853 IO depths : 1=1.7%, 2=3.8%, 4=13.5%, 8=69.4%, 16=11.5%, 32=0.0%, >=64=0.0% 00:29:58.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.853 complete : 0=0.0%, 4=91.5%, 8=3.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.853 issued rwts: total=4614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.853 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.853 filename1: (groupid=0, jobs=1): err= 0: pid=2627812: Fri Apr 26 16:12:37 2024 00:29:58.853 read: IOPS=463, BW=1852KiB/s (1897kB/s)(18.1MiB/10019msec) 00:29:58.853 slat (usec): min=6, max=107, avg=35.45, stdev=22.65 00:29:58.853 clat (usec): min=13045, max=67858, avg=34280.06, stdev=6225.28 00:29:58.853 lat (usec): min=13075, max=67880, avg=34315.51, stdev=6223.84 00:29:58.853 clat percentiles (usec): 00:29:58.853 | 1.00th=[18482], 5.00th=[25822], 10.00th=[30540], 20.00th=[31851], 00:29:58.853 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:29:58.853 | 70.00th=[33817], 80.00th=[36439], 90.00th=[42730], 95.00th=[47449], 00:29:58.853 | 99.00th=[54264], 99.50th=[57410], 99.90th=[67634], 99.95th=[67634], 00:29:58.853 | 99.99th=[67634] 00:29:58.853 bw ( KiB/s): min= 1616, max= 1976, per=4.07%, avg=1849.60, stdev=94.11, samples=20 00:29:58.853 iops : min= 404, max= 494, avg=462.40, stdev=23.53, samples=20 00:29:58.853 lat (msec) : 20=1.70%, 50=95.19%, 100=3.10% 00:29:58.853 cpu : usr=98.79%, sys=0.78%, ctx=14, majf=0, minf=1634 00:29:58.853 IO depths : 1=2.9%, 2=5.8%, 4=15.6%, 8=65.3%, 16=10.5%, 32=0.0%, >=64=0.0% 00:29:58.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.853 complete : 0=0.0%, 4=91.8%, 8=3.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.853 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.853 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.853 filename1: (groupid=0, jobs=1): err= 0: pid=2627813: Fri Apr 26 16:12:37 2024 00:29:58.853 read: IOPS=465, BW=1863KiB/s (1908kB/s)(18.2MiB/10003msec) 00:29:58.853 slat (usec): min=6, max=133, avg=34.63, stdev=22.50 00:29:58.853 clat (usec): min=14504, max=70582, avg=34154.09, stdev=5549.81 00:29:58.853 lat (usec): min=14525, max=70606, avg=34188.73, stdev=5548.35 00:29:58.853 clat percentiles (usec): 00:29:58.853 | 1.00th=[20317], 5.00th=[28705], 10.00th=[31327], 20.00th=[32113], 00:29:58.853 | 30.00th=[32375], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:29:58.853 | 70.00th=[33817], 80.00th=[34341], 90.00th=[40633], 95.00th=[45876], 00:29:58.853 | 99.00th=[53740], 99.50th=[56361], 99.90th=[70779], 99.95th=[70779], 00:29:58.853 | 99.99th=[70779] 00:29:58.853 bw ( KiB/s): min= 1664, max= 1968, per=4.08%, avg=1855.58, stdev=91.47, samples=19 00:29:58.853 iops : min= 416, max= 492, avg=463.89, stdev=22.87, samples=19 00:29:58.853 lat (msec) : 20=0.73%, 50=96.91%, 100=2.36% 00:29:58.853 cpu : usr=98.41%, sys=1.16%, ctx=18, majf=0, minf=1635 00:29:58.853 IO depths : 1=1.2%, 2=2.3%, 4=8.5%, 8=73.5%, 16=14.4%, 32=0.0%, >=64=0.0% 00:29:58.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.853 complete : 0=0.0%, 4=90.8%, 8=6.5%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.853 issued 
rwts: total=4659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.853 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.853 filename1: (groupid=0, jobs=1): err= 0: pid=2627814: Fri Apr 26 16:12:37 2024 00:29:58.853 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10031msec) 00:29:58.853 slat (usec): min=3, max=109, avg=31.01, stdev=21.29 00:29:58.853 clat (usec): min=10053, max=69860, avg=33246.48, stdev=5586.57 00:29:58.853 lat (usec): min=10062, max=69880, avg=33277.49, stdev=5586.91 00:29:58.853 clat percentiles (usec): 00:29:58.853 | 1.00th=[18220], 5.00th=[23987], 10.00th=[30540], 20.00th=[31851], 00:29:58.853 | 30.00th=[32113], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:29:58.853 | 70.00th=[33424], 80.00th=[33817], 90.00th=[36963], 95.00th=[43779], 00:29:58.853 | 99.00th=[54789], 99.50th=[57410], 99.90th=[69731], 99.95th=[69731], 00:29:58.853 | 99.99th=[69731] 00:29:58.853 bw ( KiB/s): min= 1760, max= 2032, per=4.21%, avg=1915.20, stdev=67.50, samples=20 00:29:58.853 iops : min= 440, max= 508, avg=478.80, stdev=16.88, samples=20 00:29:58.853 lat (msec) : 20=2.65%, 50=95.29%, 100=2.06% 00:29:58.853 cpu : usr=98.69%, sys=0.85%, ctx=17, majf=0, minf=1631 00:29:58.853 IO depths : 1=0.4%, 2=2.0%, 4=9.9%, 8=73.1%, 16=14.6%, 32=0.0%, >=64=0.0% 00:29:58.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.853 complete : 0=0.0%, 4=91.2%, 8=5.2%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.853 issued rwts: total=4798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.853 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.853 filename1: (groupid=0, jobs=1): err= 0: pid=2627815: Fri Apr 26 16:12:37 2024 00:29:58.853 read: IOPS=460, BW=1841KiB/s (1885kB/s)(18.0MiB/10007msec) 00:29:58.853 slat (usec): min=6, max=104, avg=31.09, stdev=21.05 00:29:58.853 clat (usec): min=6816, max=72171, avg=34608.82, stdev=6593.84 00:29:58.853 lat (usec): min=6825, max=72195, avg=34639.91, stdev=6593.35 00:29:58.853 clat percentiles (usec): 00:29:58.853 | 1.00th=[19792], 5.00th=[25822], 10.00th=[30802], 20.00th=[32113], 00:29:58.853 | 30.00th=[32375], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:29:58.853 | 70.00th=[33817], 80.00th=[36963], 90.00th=[43254], 95.00th=[47973], 00:29:58.853 | 99.00th=[56361], 99.50th=[60556], 99.90th=[71828], 99.95th=[71828], 00:29:58.853 | 99.99th=[71828] 00:29:58.853 bw ( KiB/s): min= 1696, max= 1968, per=4.03%, avg=1831.32, stdev=66.12, samples=19 00:29:58.853 iops : min= 424, max= 492, avg=457.79, stdev=16.59, samples=19 00:29:58.853 lat (msec) : 10=0.13%, 20=0.93%, 50=95.24%, 100=3.69% 00:29:58.853 cpu : usr=98.51%, sys=1.05%, ctx=19, majf=0, minf=1634 00:29:58.853 IO depths : 1=0.3%, 2=0.6%, 4=7.4%, 8=76.7%, 16=15.0%, 32=0.0%, >=64=0.0% 00:29:58.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.853 complete : 0=0.0%, 4=90.4%, 8=6.6%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.853 issued rwts: total=4605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.853 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.853 filename1: (groupid=0, jobs=1): err= 0: pid=2627817: Fri Apr 26 16:12:37 2024 00:29:58.853 read: IOPS=456, BW=1826KiB/s (1870kB/s)(17.8MiB/10004msec) 00:29:58.853 slat (usec): min=3, max=104, avg=32.49, stdev=22.59 00:29:58.853 clat (usec): min=10895, max=89521, avg=34872.79, stdev=6304.44 00:29:58.853 lat (usec): min=10927, max=89537, avg=34905.28, stdev=6302.41 00:29:58.853 clat percentiles (usec): 00:29:58.853 | 1.00th=[21890], 5.00th=[28181], 
10.00th=[31327], 20.00th=[32113], 00:29:58.853 | 30.00th=[32375], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:29:58.853 | 70.00th=[33817], 80.00th=[37487], 90.00th=[42730], 95.00th=[47973], 00:29:58.853 | 99.00th=[54789], 99.50th=[58459], 99.90th=[70779], 99.95th=[89654], 00:29:58.853 | 99.99th=[89654] 00:29:58.853 bw ( KiB/s): min= 1536, max= 1920, per=4.00%, avg=1818.11, stdev=81.27, samples=19 00:29:58.853 iops : min= 384, max= 480, avg=454.53, stdev=20.32, samples=19 00:29:58.853 lat (msec) : 20=0.74%, 50=95.40%, 100=3.85% 00:29:58.853 cpu : usr=98.60%, sys=0.97%, ctx=16, majf=0, minf=1632 00:29:58.853 IO depths : 1=0.4%, 2=0.8%, 4=7.8%, 8=76.4%, 16=14.6%, 32=0.0%, >=64=0.0% 00:29:58.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.853 complete : 0=0.0%, 4=90.3%, 8=6.5%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.853 issued rwts: total=4568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.853 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.853 filename1: (groupid=0, jobs=1): err= 0: pid=2627818: Fri Apr 26 16:12:37 2024 00:29:58.853 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10014msec) 00:29:58.853 slat (usec): min=3, max=105, avg=33.30, stdev=21.47 00:29:58.853 clat (usec): min=16664, max=81961, avg=32497.63, stdev=3659.58 00:29:58.853 lat (usec): min=16680, max=81976, avg=32530.93, stdev=3660.65 00:29:58.853 clat percentiles (usec): 00:29:58.853 | 1.00th=[18220], 5.00th=[29754], 10.00th=[31065], 20.00th=[31851], 00:29:58.854 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:29:58.854 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:29:58.854 | 99.00th=[45351], 99.50th=[49546], 99.90th=[64226], 99.95th=[64226], 00:29:58.854 | 99.99th=[82314] 00:29:58.854 bw ( KiB/s): min= 1784, max= 2176, per=4.29%, avg=1949.05, stdev=89.61, samples=19 00:29:58.854 iops : min= 446, max= 544, avg=487.26, stdev=22.40, samples=19 00:29:58.854 lat (msec) : 20=1.92%, 50=97.67%, 100=0.41% 00:29:58.854 cpu : usr=98.79%, sys=0.80%, ctx=12, majf=0, minf=1634 00:29:58.854 IO depths : 1=5.0%, 2=10.4%, 4=22.4%, 8=54.2%, 16=8.0%, 32=0.0%, >=64=0.0% 00:29:58.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.854 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.854 issued rwts: total=4885,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.854 filename2: (groupid=0, jobs=1): err= 0: pid=2627819: Fri Apr 26 16:12:37 2024 00:29:58.854 read: IOPS=482, BW=1931KiB/s (1977kB/s)(18.9MiB/10009msec) 00:29:58.854 slat (usec): min=3, max=108, avg=44.22, stdev=21.97 00:29:58.854 clat (usec): min=14201, max=67990, avg=32779.10, stdev=2975.47 00:29:58.854 lat (usec): min=14209, max=68007, avg=32823.33, stdev=2974.42 00:29:58.854 clat percentiles (usec): 00:29:58.854 | 1.00th=[24773], 5.00th=[31065], 10.00th=[31327], 20.00th=[31851], 00:29:58.854 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:29:58.854 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:29:58.854 | 99.00th=[41157], 99.50th=[46924], 99.90th=[67634], 99.95th=[67634], 00:29:58.854 | 99.99th=[67634] 00:29:58.854 bw ( KiB/s): min= 1664, max= 2048, per=4.24%, avg=1926.74, stdev=78.26, samples=19 00:29:58.854 iops : min= 416, max= 512, avg=481.68, stdev=19.56, samples=19 00:29:58.854 lat (msec) : 20=0.66%, 50=98.97%, 100=0.37% 00:29:58.854 cpu : usr=98.78%, sys=0.79%, ctx=15, 
majf=0, minf=1636 00:29:58.854 IO depths : 1=4.3%, 2=9.1%, 4=20.7%, 8=57.7%, 16=8.2%, 32=0.0%, >=64=0.0% 00:29:58.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.854 complete : 0=0.0%, 4=92.9%, 8=1.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.854 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.854 filename2: (groupid=0, jobs=1): err= 0: pid=2627820: Fri Apr 26 16:12:37 2024 00:29:58.854 read: IOPS=462, BW=1848KiB/s (1893kB/s)(18.1MiB/10035msec) 00:29:58.854 slat (usec): min=3, max=106, avg=28.42, stdev=19.24 00:29:58.854 clat (usec): min=6402, max=84518, avg=34428.73, stdev=7763.17 00:29:58.854 lat (usec): min=6426, max=84536, avg=34457.14, stdev=7762.82 00:29:58.854 clat percentiles (usec): 00:29:58.854 | 1.00th=[11994], 5.00th=[22676], 10.00th=[28181], 20.00th=[31851], 00:29:58.854 | 30.00th=[32375], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:29:58.854 | 70.00th=[33817], 80.00th=[37487], 90.00th=[44827], 95.00th=[50070], 00:29:58.854 | 99.00th=[57410], 99.50th=[58983], 99.90th=[84411], 99.95th=[84411], 00:29:58.854 | 99.99th=[84411] 00:29:58.854 bw ( KiB/s): min= 1688, max= 2000, per=4.06%, avg=1846.65, stdev=83.05, samples=20 00:29:58.854 iops : min= 422, max= 500, avg=461.65, stdev=20.76, samples=20 00:29:58.854 lat (msec) : 10=0.17%, 20=3.21%, 50=91.83%, 100=4.79% 00:29:58.854 cpu : usr=96.70%, sys=1.64%, ctx=81, majf=0, minf=1636 00:29:58.854 IO depths : 1=1.0%, 2=2.5%, 4=12.8%, 8=70.5%, 16=13.1%, 32=0.0%, >=64=0.0% 00:29:58.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.854 complete : 0=0.0%, 4=91.7%, 8=4.0%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.854 issued rwts: total=4637,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.854 filename2: (groupid=0, jobs=1): err= 0: pid=2627821: Fri Apr 26 16:12:37 2024 00:29:58.854 read: IOPS=455, BW=1823KiB/s (1866kB/s)(17.8MiB/10006msec) 00:29:58.854 slat (usec): min=3, max=101, avg=29.29, stdev=20.28 00:29:58.854 clat (usec): min=16077, max=73954, avg=34967.95, stdev=5927.86 00:29:58.854 lat (usec): min=16087, max=73970, avg=34997.23, stdev=5926.57 00:29:58.854 clat percentiles (usec): 00:29:58.854 | 1.00th=[20579], 5.00th=[30802], 10.00th=[31589], 20.00th=[32113], 00:29:58.854 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:29:58.854 | 70.00th=[33817], 80.00th=[36963], 90.00th=[43254], 95.00th=[47449], 00:29:58.854 | 99.00th=[52691], 99.50th=[56361], 99.90th=[73925], 99.95th=[73925], 00:29:58.854 | 99.99th=[73925] 00:29:58.854 bw ( KiB/s): min= 1440, max= 2000, per=3.98%, avg=1811.79, stdev=154.15, samples=19 00:29:58.854 iops : min= 360, max= 500, avg=452.95, stdev=38.54, samples=19 00:29:58.854 lat (msec) : 20=0.57%, 50=96.82%, 100=2.61% 00:29:58.854 cpu : usr=98.46%, sys=1.10%, ctx=22, majf=0, minf=1636 00:29:58.854 IO depths : 1=0.6%, 2=1.2%, 4=7.6%, 8=75.0%, 16=15.6%, 32=0.0%, >=64=0.0% 00:29:58.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.854 complete : 0=0.0%, 4=91.0%, 8=6.8%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.854 issued rwts: total=4559,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.854 filename2: (groupid=0, jobs=1): err= 0: pid=2627822: Fri Apr 26 16:12:37 2024 00:29:58.854 read: IOPS=481, BW=1928KiB/s (1974kB/s)(18.9MiB/10027msec) 
00:29:58.854 slat (usec): min=3, max=109, avg=37.68, stdev=22.08 00:29:58.854 clat (usec): min=15693, max=56933, avg=32922.99, stdev=3440.80 00:29:58.854 lat (usec): min=15703, max=56990, avg=32960.67, stdev=3440.35 00:29:58.854 clat percentiles (usec): 00:29:58.854 | 1.00th=[20841], 5.00th=[30540], 10.00th=[31327], 20.00th=[31851], 00:29:58.854 | 30.00th=[32113], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:29:58.854 | 70.00th=[33162], 80.00th=[33424], 90.00th=[34341], 95.00th=[38011], 00:29:58.854 | 99.00th=[49021], 99.50th=[51119], 99.90th=[55837], 99.95th=[56361], 00:29:58.854 | 99.99th=[56886] 00:29:58.854 bw ( KiB/s): min= 1760, max= 2048, per=4.23%, avg=1924.85, stdev=81.41, samples=20 00:29:58.854 iops : min= 440, max= 512, avg=481.20, stdev=20.36, samples=20 00:29:58.854 lat (msec) : 20=0.58%, 50=98.78%, 100=0.64% 00:29:58.854 cpu : usr=98.65%, sys=0.91%, ctx=19, majf=0, minf=1637 00:29:58.854 IO depths : 1=2.7%, 2=6.7%, 4=20.3%, 8=60.3%, 16=9.9%, 32=0.0%, >=64=0.0% 00:29:58.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.854 complete : 0=0.0%, 4=93.3%, 8=1.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.854 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.854 filename2: (groupid=0, jobs=1): err= 0: pid=2627824: Fri Apr 26 16:12:37 2024 00:29:58.854 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.8MiB/10005msec) 00:29:58.854 slat (usec): min=4, max=110, avg=32.97, stdev=21.67 00:29:58.854 clat (usec): min=16718, max=72533, avg=32999.16, stdev=3397.37 00:29:58.854 lat (usec): min=16731, max=72560, avg=33032.13, stdev=3397.16 00:29:58.854 clat percentiles (usec): 00:29:58.854 | 1.00th=[22152], 5.00th=[30540], 10.00th=[31327], 20.00th=[31851], 00:29:58.854 | 30.00th=[32113], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:29:58.854 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[36963], 00:29:58.854 | 99.00th=[47449], 99.50th=[54789], 99.90th=[61604], 99.95th=[72877], 00:29:58.854 | 99.99th=[72877] 00:29:58.854 bw ( KiB/s): min= 1792, max= 2048, per=4.23%, avg=1925.89, stdev=69.58, samples=19 00:29:58.854 iops : min= 448, max= 512, avg=481.47, stdev=17.40, samples=19 00:29:58.854 lat (msec) : 20=0.21%, 50=99.09%, 100=0.71% 00:29:58.854 cpu : usr=98.36%, sys=1.18%, ctx=18, majf=0, minf=1635 00:29:58.854 IO depths : 1=4.2%, 2=8.5%, 4=18.6%, 8=59.5%, 16=9.2%, 32=0.0%, >=64=0.0% 00:29:58.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.854 complete : 0=0.0%, 4=92.8%, 8=2.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.854 issued rwts: total=4814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.854 filename2: (groupid=0, jobs=1): err= 0: pid=2627825: Fri Apr 26 16:12:37 2024 00:29:58.854 read: IOPS=495, BW=1981KiB/s (2029kB/s)(19.4MiB/10015msec) 00:29:58.854 slat (usec): min=3, max=150, avg=29.36, stdev=21.36 00:29:58.854 clat (usec): min=11740, max=59131, avg=32104.36, stdev=4109.78 00:29:58.854 lat (usec): min=11748, max=59143, avg=32133.73, stdev=4113.12 00:29:58.854 clat percentiles (usec): 00:29:58.854 | 1.00th=[17171], 5.00th=[23200], 10.00th=[28443], 20.00th=[31589], 00:29:58.854 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:29:58.854 | 70.00th=[33162], 80.00th=[33817], 90.00th=[34341], 95.00th=[35914], 00:29:58.854 | 99.00th=[42730], 99.50th=[50594], 99.90th=[57410], 99.95th=[58983], 00:29:58.854 | 
99.99th=[58983] 00:29:58.854 bw ( KiB/s): min= 1920, max= 2192, per=4.35%, avg=1977.60, stdev=92.19, samples=20 00:29:58.854 iops : min= 480, max= 548, avg=494.40, stdev=23.05, samples=20 00:29:58.854 lat (msec) : 20=1.75%, 50=97.70%, 100=0.54% 00:29:58.854 cpu : usr=98.37%, sys=1.10%, ctx=102, majf=0, minf=1637 00:29:58.854 IO depths : 1=2.1%, 2=4.2%, 4=12.0%, 8=70.2%, 16=11.5%, 32=0.0%, >=64=0.0% 00:29:58.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.854 complete : 0=0.0%, 4=90.9%, 8=4.5%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.854 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.854 filename2: (groupid=0, jobs=1): err= 0: pid=2627826: Fri Apr 26 16:12:37 2024 00:29:58.854 read: IOPS=482, BW=1929KiB/s (1975kB/s)(18.9MiB/10019msec) 00:29:58.854 slat (usec): min=3, max=152, avg=40.87, stdev=22.49 00:29:58.854 clat (usec): min=14260, max=62542, avg=32847.23, stdev=2552.88 00:29:58.854 lat (usec): min=14270, max=62559, avg=32888.10, stdev=2550.72 00:29:58.854 clat percentiles (usec): 00:29:58.854 | 1.00th=[25297], 5.00th=[31065], 10.00th=[31327], 20.00th=[31851], 00:29:58.854 | 30.00th=[32113], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:29:58.854 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:29:58.854 | 99.00th=[43779], 99.50th=[49021], 99.90th=[56361], 99.95th=[62653], 00:29:58.854 | 99.99th=[62653] 00:29:58.855 bw ( KiB/s): min= 1744, max= 2048, per=4.24%, avg=1926.40, stdev=58.82, samples=20 00:29:58.855 iops : min= 436, max= 512, avg=481.60, stdev=14.71, samples=20 00:29:58.855 lat (msec) : 20=0.10%, 50=99.44%, 100=0.46% 00:29:58.855 cpu : usr=98.74%, sys=0.83%, ctx=14, majf=0, minf=1636 00:29:58.855 IO depths : 1=4.3%, 2=10.3%, 4=24.4%, 8=52.8%, 16=8.3%, 32=0.0%, >=64=0.0% 00:29:58.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.855 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.855 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.855 filename2: (groupid=0, jobs=1): err= 0: pid=2627827: Fri Apr 26 16:12:37 2024 00:29:58.855 read: IOPS=468, BW=1876KiB/s (1921kB/s)(18.4MiB/10022msec) 00:29:58.855 slat (usec): min=4, max=172, avg=36.70, stdev=20.44 00:29:58.855 clat (usec): min=13385, max=61553, avg=33826.11, stdev=5301.39 00:29:58.855 lat (usec): min=13422, max=61563, avg=33862.81, stdev=5300.48 00:29:58.855 clat percentiles (usec): 00:29:58.855 | 1.00th=[19530], 5.00th=[29492], 10.00th=[31065], 20.00th=[31851], 00:29:58.855 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[33162], 00:29:58.855 | 70.00th=[33424], 80.00th=[33817], 90.00th=[40633], 95.00th=[45876], 00:29:58.855 | 99.00th=[53216], 99.50th=[55837], 99.90th=[58983], 99.95th=[58983], 00:29:58.855 | 99.99th=[61604] 00:29:58.855 bw ( KiB/s): min= 1664, max= 2016, per=4.12%, avg=1873.60, stdev=78.19, samples=20 00:29:58.855 iops : min= 416, max= 504, avg=468.40, stdev=19.55, samples=20 00:29:58.855 lat (msec) : 20=1.19%, 50=96.60%, 100=2.21% 00:29:58.855 cpu : usr=94.77%, sys=2.36%, ctx=89, majf=0, minf=1633 00:29:58.855 IO depths : 1=3.7%, 2=7.8%, 4=18.9%, 8=59.7%, 16=9.9%, 32=0.0%, >=64=0.0% 00:29:58.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.855 complete : 0=0.0%, 4=93.0%, 8=2.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.855 issued 
rwts: total=4700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:58.855 00:29:58.855 Run status group 0 (all jobs): 00:29:58.855 READ: bw=44.4MiB/s (46.6MB/s), 1823KiB/s-1981KiB/s (1866kB/s-2029kB/s), io=446MiB (467MB), run=10002-10036msec 00:29:59.425 ----------------------------------------------------- 00:29:59.425 Suppressions used: 00:29:59.425 count bytes template 00:29:59.425 45 402 /usr/src/fio/parse.c 00:29:59.425 1 8 libtcmalloc_minimal.so 00:29:59.425 1 904 libcrypto.so 00:29:59.425 ----------------------------------------------------- 00:29:59.425 00:29:59.425 16:12:38 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:29:59.425 16:12:38 -- target/dif.sh@43 -- # local sub 00:29:59.425 16:12:38 -- target/dif.sh@45 -- # for sub in "$@" 00:29:59.425 16:12:38 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:59.425 16:12:38 -- target/dif.sh@36 -- # local sub_id=0 00:29:59.425 16:12:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:59.425 16:12:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:59.425 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:29:59.425 16:12:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.425 16:12:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:59.425 16:12:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:59.425 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:29:59.425 16:12:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.425 16:12:38 -- target/dif.sh@45 -- # for sub in "$@" 00:29:59.425 16:12:38 -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:59.425 16:12:38 -- target/dif.sh@36 -- # local sub_id=1 00:29:59.425 16:12:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:59.425 16:12:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:59.425 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:29:59.425 16:12:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.425 16:12:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:59.425 16:12:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:59.425 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:29:59.425 16:12:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.425 16:12:38 -- target/dif.sh@45 -- # for sub in "$@" 00:29:59.425 16:12:38 -- target/dif.sh@46 -- # destroy_subsystem 2 00:29:59.425 16:12:38 -- target/dif.sh@36 -- # local sub_id=2 00:29:59.425 16:12:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:59.425 16:12:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:59.425 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:29:59.425 16:12:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.425 16:12:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:29:59.425 16:12:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:59.425 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:29:59.425 16:12:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.425 16:12:38 -- target/dif.sh@115 -- # NULL_DIF=1 00:29:59.425 16:12:38 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:29:59.425 16:12:38 -- target/dif.sh@115 -- # numjobs=2 00:29:59.425 16:12:38 -- target/dif.sh@115 -- # iodepth=8 00:29:59.425 16:12:38 -- target/dif.sh@115 -- # runtime=5 00:29:59.425 16:12:38 -- target/dif.sh@115 -- # files=1 00:29:59.425 
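For reference, the create/destroy cycle traced through target/dif.sh here reduces to a short SPDK RPC sequence. A minimal sketch, assuming a running nvmf_tgt with a TCP transport already created and scripts/rpc.py from the SPDK tree (the harness reaches the same RPCs through its rpc_cmd wrapper; paths are illustrative):

    # create a DIF-capable null bdev and export it over NVMe/TCP (as in create_subsystem 0 above)
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # tear the same subsystem back down (as in destroy_subsystem 0 above)
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_null_delete bdev_null0
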
16:12:38 -- target/dif.sh@117 -- # create_subsystems 0 1 00:29:59.425 16:12:38 -- target/dif.sh@28 -- # local sub 00:29:59.425 16:12:38 -- target/dif.sh@30 -- # for sub in "$@" 00:29:59.425 16:12:38 -- target/dif.sh@31 -- # create_subsystem 0 00:29:59.425 16:12:38 -- target/dif.sh@18 -- # local sub_id=0 00:29:59.425 16:12:38 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:59.425 16:12:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:59.426 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:29:59.426 bdev_null0 00:29:59.426 16:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.426 16:12:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:59.426 16:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:59.426 16:12:39 -- common/autotest_common.sh@10 -- # set +x 00:29:59.426 16:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.426 16:12:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:59.426 16:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:59.426 16:12:39 -- common/autotest_common.sh@10 -- # set +x 00:29:59.426 16:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.426 16:12:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:59.426 16:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:59.426 16:12:39 -- common/autotest_common.sh@10 -- # set +x 00:29:59.426 [2024-04-26 16:12:39.024526] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.426 16:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.426 16:12:39 -- target/dif.sh@30 -- # for sub in "$@" 00:29:59.426 16:12:39 -- target/dif.sh@31 -- # create_subsystem 1 00:29:59.426 16:12:39 -- target/dif.sh@18 -- # local sub_id=1 00:29:59.426 16:12:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:59.426 16:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:59.426 16:12:39 -- common/autotest_common.sh@10 -- # set +x 00:29:59.426 bdev_null1 00:29:59.426 16:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.426 16:12:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:59.426 16:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:59.426 16:12:39 -- common/autotest_common.sh@10 -- # set +x 00:29:59.426 16:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.426 16:12:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:59.426 16:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:59.426 16:12:39 -- common/autotest_common.sh@10 -- # set +x 00:29:59.426 16:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.426 16:12:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:59.426 16:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:59.426 16:12:39 -- common/autotest_common.sh@10 -- # set +x 00:29:59.426 16:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:59.426 16:12:39 -- target/dif.sh@118 -- # fio /dev/fd/62 00:29:59.426 16:12:39 -- target/dif.sh@118 -- # 
create_json_sub_conf 0 1 00:29:59.426 16:12:39 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:59.426 16:12:39 -- nvmf/common.sh@521 -- # config=() 00:29:59.426 16:12:39 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:59.426 16:12:39 -- nvmf/common.sh@521 -- # local subsystem config 00:29:59.426 16:12:39 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:59.426 16:12:39 -- target/dif.sh@82 -- # gen_fio_conf 00:29:59.426 16:12:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:59.426 16:12:39 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:59.426 16:12:39 -- target/dif.sh@54 -- # local file 00:29:59.426 16:12:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:59.426 { 00:29:59.426 "params": { 00:29:59.426 "name": "Nvme$subsystem", 00:29:59.426 "trtype": "$TEST_TRANSPORT", 00:29:59.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:59.426 "adrfam": "ipv4", 00:29:59.426 "trsvcid": "$NVMF_PORT", 00:29:59.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:59.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:59.426 "hdgst": ${hdgst:-false}, 00:29:59.426 "ddgst": ${ddgst:-false} 00:29:59.426 }, 00:29:59.426 "method": "bdev_nvme_attach_controller" 00:29:59.426 } 00:29:59.426 EOF 00:29:59.426 )") 00:29:59.426 16:12:39 -- target/dif.sh@56 -- # cat 00:29:59.426 16:12:39 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:59.426 16:12:39 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:59.426 16:12:39 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:59.426 16:12:39 -- common/autotest_common.sh@1327 -- # shift 00:29:59.426 16:12:39 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:59.426 16:12:39 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:59.426 16:12:39 -- nvmf/common.sh@543 -- # cat 00:29:59.426 16:12:39 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:59.426 16:12:39 -- target/dif.sh@72 -- # (( file <= files )) 00:29:59.426 16:12:39 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:59.426 16:12:39 -- target/dif.sh@73 -- # cat 00:29:59.426 16:12:39 -- common/autotest_common.sh@1331 -- # grep libasan 00:29:59.426 16:12:39 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:59.426 16:12:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:59.426 16:12:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:59.426 { 00:29:59.426 "params": { 00:29:59.426 "name": "Nvme$subsystem", 00:29:59.426 "trtype": "$TEST_TRANSPORT", 00:29:59.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:59.426 "adrfam": "ipv4", 00:29:59.426 "trsvcid": "$NVMF_PORT", 00:29:59.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:59.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:59.426 "hdgst": ${hdgst:-false}, 00:29:59.426 "ddgst": ${ddgst:-false} 00:29:59.426 }, 00:29:59.426 "method": "bdev_nvme_attach_controller" 00:29:59.426 } 00:29:59.426 EOF 00:29:59.426 )") 00:29:59.426 16:12:39 -- target/dif.sh@72 -- # (( file++ )) 00:29:59.426 16:12:39 -- target/dif.sh@72 -- # (( file <= files )) 00:29:59.426 16:12:39 -- nvmf/common.sh@543 -- # cat 00:29:59.426 16:12:39 -- nvmf/common.sh@545 -- # jq . 
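The fio_bdev call traced here is plain fio with the SPDK bdev fio plugin preloaded; gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem and hands the result to the plugin via --spdk_json_conf. A rough standalone equivalent, assuming the standard SPDK JSON config wrapper around the per-controller params fragment assembled in the heredoc above (only the fragment appears in the trace), a plugin built at build/fio/spdk_bdev, and the usual Nvme0n1 namespace bdev name; the file path and job options are illustrative:

    cat > /tmp/nvmf_target.json <<'JSON'
    { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                    "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false } } ] } ] }
    JSON
    # preload the plugin exactly as the harness does, then point fio at the attached bdev
    LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --name=dif0 --ioengine=spdk_bdev \
        --spdk_json_conf=/tmp/nvmf_target.json --filename=Nvme0n1 \
        --rw=randread --bs=8k --iodepth=8 --runtime=5 --thread
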
00:29:59.426 16:12:39 -- nvmf/common.sh@546 -- # IFS=, 00:29:59.426 16:12:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:59.426 "params": { 00:29:59.426 "name": "Nvme0", 00:29:59.426 "trtype": "tcp", 00:29:59.426 "traddr": "10.0.0.2", 00:29:59.426 "adrfam": "ipv4", 00:29:59.426 "trsvcid": "4420", 00:29:59.426 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:59.426 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:59.426 "hdgst": false, 00:29:59.426 "ddgst": false 00:29:59.426 }, 00:29:59.426 "method": "bdev_nvme_attach_controller" 00:29:59.426 },{ 00:29:59.426 "params": { 00:29:59.426 "name": "Nvme1", 00:29:59.426 "trtype": "tcp", 00:29:59.426 "traddr": "10.0.0.2", 00:29:59.426 "adrfam": "ipv4", 00:29:59.426 "trsvcid": "4420", 00:29:59.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:59.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:59.426 "hdgst": false, 00:29:59.426 "ddgst": false 00:29:59.426 }, 00:29:59.426 "method": "bdev_nvme_attach_controller" 00:29:59.426 }' 00:29:59.426 16:12:39 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:59.426 16:12:39 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:59.426 16:12:39 -- common/autotest_common.sh@1333 -- # break 00:29:59.426 16:12:39 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:59.426 16:12:39 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:00.031 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:00.031 ... 00:30:00.031 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:00.031 ... 
00:30:00.031 fio-3.35 00:30:00.031 Starting 4 threads 00:30:00.031 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.596 00:30:06.596 filename0: (groupid=0, jobs=1): err= 0: pid=2629930: Fri Apr 26 16:12:45 2024 00:30:06.596 read: IOPS=2222, BW=17.4MiB/s (18.2MB/s)(86.9MiB/5003msec) 00:30:06.596 slat (nsec): min=6930, max=65957, avg=19662.72, stdev=12420.72 00:30:06.596 clat (usec): min=2129, max=12508, avg=3550.57, stdev=481.34 00:30:06.596 lat (usec): min=2136, max=12542, avg=3570.23, stdev=480.83 00:30:06.596 clat percentiles (usec): 00:30:06.596 | 1.00th=[ 2573], 5.00th=[ 2835], 10.00th=[ 3064], 20.00th=[ 3294], 00:30:06.596 | 30.00th=[ 3392], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3556], 00:30:06.596 | 70.00th=[ 3654], 80.00th=[ 3818], 90.00th=[ 4047], 95.00th=[ 4293], 00:30:06.596 | 99.00th=[ 4883], 99.50th=[ 5014], 99.90th=[ 5669], 99.95th=[12125], 00:30:06.596 | 99.99th=[12256] 00:30:06.596 bw ( KiB/s): min=17024, max=18640, per=25.22%, avg=17785.60, stdev=450.86, samples=10 00:30:06.596 iops : min= 2128, max= 2330, avg=2223.20, stdev=56.36, samples=10 00:30:06.596 lat (msec) : 4=88.87%, 10=11.06%, 20=0.07% 00:30:06.596 cpu : usr=97.56%, sys=2.08%, ctx=8, majf=0, minf=1637 00:30:06.596 IO depths : 1=0.1%, 2=0.8%, 4=65.6%, 8=33.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:06.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.596 complete : 0=0.0%, 4=96.7%, 8=3.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.596 issued rwts: total=11121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.596 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:06.596 filename0: (groupid=0, jobs=1): err= 0: pid=2629931: Fri Apr 26 16:12:45 2024 00:30:06.596 read: IOPS=2231, BW=17.4MiB/s (18.3MB/s)(87.2MiB/5002msec) 00:30:06.596 slat (nsec): min=5594, max=62935, avg=14363.26, stdev=8255.83 00:30:06.596 clat (usec): min=1778, max=10342, avg=3551.76, stdev=442.18 00:30:06.596 lat (usec): min=1785, max=10363, avg=3566.13, stdev=441.73 00:30:06.596 clat percentiles (usec): 00:30:06.596 | 1.00th=[ 2573], 5.00th=[ 2835], 10.00th=[ 3064], 20.00th=[ 3294], 00:30:06.596 | 30.00th=[ 3425], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3556], 00:30:06.596 | 70.00th=[ 3654], 80.00th=[ 3818], 90.00th=[ 4015], 95.00th=[ 4293], 00:30:06.596 | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 5276], 99.95th=[10159], 00:30:06.596 | 99.99th=[10290] 00:30:06.596 bw ( KiB/s): min=17186, max=18688, per=25.32%, avg=17853.00, stdev=409.84, samples=10 00:30:06.596 iops : min= 2148, max= 2336, avg=2231.60, stdev=51.28, samples=10 00:30:06.596 lat (msec) : 2=0.03%, 4=89.10%, 10=10.81%, 20=0.07% 00:30:06.596 cpu : usr=97.06%, sys=2.52%, ctx=9, majf=0, minf=1636 00:30:06.596 IO depths : 1=0.1%, 2=0.6%, 4=64.3%, 8=35.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:06.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.596 complete : 0=0.0%, 4=98.0%, 8=2.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.596 issued rwts: total=11161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.596 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:06.596 filename1: (groupid=0, jobs=1): err= 0: pid=2629932: Fri Apr 26 16:12:45 2024 00:30:06.596 read: IOPS=2193, BW=17.1MiB/s (18.0MB/s)(86.5MiB/5044msec) 00:30:06.596 slat (nsec): min=4917, max=66944, avg=13359.97, stdev=8026.10 00:30:06.596 clat (usec): min=2049, max=53418, avg=3595.19, stdev=1556.86 00:30:06.596 lat (usec): min=2057, max=53440, avg=3608.55, stdev=1556.64 00:30:06.596 clat percentiles (usec): 00:30:06.596 | 
1.00th=[ 2540], 5.00th=[ 2835], 10.00th=[ 3064], 20.00th=[ 3294], 00:30:06.596 | 30.00th=[ 3425], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3556], 00:30:06.596 | 70.00th=[ 3654], 80.00th=[ 3818], 90.00th=[ 4047], 95.00th=[ 4293], 00:30:06.596 | 99.00th=[ 4883], 99.50th=[ 5014], 99.90th=[ 5538], 99.95th=[53216], 00:30:06.596 | 99.99th=[53216] 00:30:06.596 bw ( KiB/s): min=15104, max=18608, per=25.10%, avg=17700.80, stdev=972.13, samples=10 00:30:06.596 iops : min= 1888, max= 2326, avg=2212.60, stdev=121.52, samples=10 00:30:06.596 lat (msec) : 4=89.06%, 10=10.84%, 50=0.03%, 100=0.07% 00:30:06.596 cpu : usr=97.24%, sys=2.38%, ctx=9, majf=0, minf=1637 00:30:06.596 IO depths : 1=0.1%, 2=0.6%, 4=65.1%, 8=34.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:06.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.596 complete : 0=0.0%, 4=97.5%, 8=2.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.596 issued rwts: total=11066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.596 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:06.596 filename1: (groupid=0, jobs=1): err= 0: pid=2629933: Fri Apr 26 16:12:45 2024 00:30:06.596 read: IOPS=2221, BW=17.4MiB/s (18.2MB/s)(86.8MiB/5001msec) 00:30:06.596 slat (nsec): min=5307, max=74690, avg=14065.07, stdev=8820.91 00:30:06.596 clat (usec): min=2060, max=6591, avg=3567.05, stdev=411.81 00:30:06.596 lat (usec): min=2073, max=6611, avg=3581.11, stdev=411.56 00:30:06.596 clat percentiles (usec): 00:30:06.596 | 1.00th=[ 2540], 5.00th=[ 2900], 10.00th=[ 3097], 20.00th=[ 3326], 00:30:06.596 | 30.00th=[ 3425], 40.00th=[ 3490], 50.00th=[ 3490], 60.00th=[ 3589], 00:30:06.596 | 70.00th=[ 3687], 80.00th=[ 3851], 90.00th=[ 4080], 95.00th=[ 4293], 00:30:06.596 | 99.00th=[ 4752], 99.50th=[ 4883], 99.90th=[ 5473], 99.95th=[ 6390], 00:30:06.596 | 99.99th=[ 6521] 00:30:06.596 bw ( KiB/s): min=17024, max=18640, per=25.20%, avg=17767.11, stdev=456.36, samples=9 00:30:06.596 iops : min= 2128, max= 2330, avg=2220.89, stdev=57.04, samples=9 00:30:06.596 lat (msec) : 4=88.41%, 10=11.59% 00:30:06.596 cpu : usr=97.36%, sys=2.26%, ctx=9, majf=0, minf=1637 00:30:06.596 IO depths : 1=0.1%, 2=0.7%, 4=64.3%, 8=34.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:06.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.597 complete : 0=0.0%, 4=98.0%, 8=2.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.597 issued rwts: total=11112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.597 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:06.597 00:30:06.597 Run status group 0 (all jobs): 00:30:06.597 READ: bw=68.9MiB/s (72.2MB/s), 17.1MiB/s-17.4MiB/s (18.0MB/s-18.3MB/s), io=347MiB (364MB), run=5001-5044msec 00:30:07.160 ----------------------------------------------------- 00:30:07.160 Suppressions used: 00:30:07.160 count bytes template 00:30:07.160 6 52 /usr/src/fio/parse.c 00:30:07.160 1 8 libtcmalloc_minimal.so 00:30:07.160 1 904 libcrypto.so 00:30:07.160 ----------------------------------------------------- 00:30:07.160 00:30:07.160 16:12:46 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:07.160 16:12:46 -- target/dif.sh@43 -- # local sub 00:30:07.160 16:12:46 -- target/dif.sh@45 -- # for sub in "$@" 00:30:07.160 16:12:46 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:07.160 16:12:46 -- target/dif.sh@36 -- # local sub_id=0 00:30:07.160 16:12:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:07.160 16:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:07.160 
16:12:46 -- common/autotest_common.sh@10 -- # set +x 00:30:07.160 16:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:07.160 16:12:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:07.160 16:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:07.160 16:12:46 -- common/autotest_common.sh@10 -- # set +x 00:30:07.160 16:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:07.160 16:12:46 -- target/dif.sh@45 -- # for sub in "$@" 00:30:07.160 16:12:46 -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:07.160 16:12:46 -- target/dif.sh@36 -- # local sub_id=1 00:30:07.160 16:12:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:07.160 16:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:07.160 16:12:46 -- common/autotest_common.sh@10 -- # set +x 00:30:07.160 16:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:07.160 16:12:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:07.160 16:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:07.160 16:12:46 -- common/autotest_common.sh@10 -- # set +x 00:30:07.160 16:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:07.160 00:30:07.160 real 0m28.447s 00:30:07.160 user 4m55.718s 00:30:07.160 sys 0m5.272s 00:30:07.160 16:12:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:07.160 16:12:46 -- common/autotest_common.sh@10 -- # set +x 00:30:07.160 ************************************ 00:30:07.160 END TEST fio_dif_rand_params 00:30:07.160 ************************************ 00:30:07.160 16:12:46 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:07.160 16:12:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:07.160 16:12:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:07.160 16:12:46 -- common/autotest_common.sh@10 -- # set +x 00:30:07.160 ************************************ 00:30:07.160 START TEST fio_dif_digest 00:30:07.160 ************************************ 00:30:07.160 16:12:46 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:30:07.160 16:12:46 -- target/dif.sh@123 -- # local NULL_DIF 00:30:07.160 16:12:46 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:07.160 16:12:46 -- target/dif.sh@125 -- # local hdgst ddgst 00:30:07.160 16:12:46 -- target/dif.sh@127 -- # NULL_DIF=3 00:30:07.160 16:12:46 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:07.160 16:12:46 -- target/dif.sh@127 -- # numjobs=3 00:30:07.160 16:12:46 -- target/dif.sh@127 -- # iodepth=3 00:30:07.160 16:12:46 -- target/dif.sh@127 -- # runtime=10 00:30:07.160 16:12:46 -- target/dif.sh@128 -- # hdgst=true 00:30:07.160 16:12:46 -- target/dif.sh@128 -- # ddgst=true 00:30:07.160 16:12:46 -- target/dif.sh@130 -- # create_subsystems 0 00:30:07.160 16:12:46 -- target/dif.sh@28 -- # local sub 00:30:07.160 16:12:46 -- target/dif.sh@30 -- # for sub in "$@" 00:30:07.160 16:12:46 -- target/dif.sh@31 -- # create_subsystem 0 00:30:07.160 16:12:46 -- target/dif.sh@18 -- # local sub_id=0 00:30:07.160 16:12:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:07.160 16:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:07.160 16:12:46 -- common/autotest_common.sh@10 -- # set +x 00:30:07.160 bdev_null0 00:30:07.160 16:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:07.161 16:12:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 
53313233-0 --allow-any-host 00:30:07.161 16:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:07.161 16:12:46 -- common/autotest_common.sh@10 -- # set +x 00:30:07.161 16:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:07.161 16:12:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:07.161 16:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:07.161 16:12:46 -- common/autotest_common.sh@10 -- # set +x 00:30:07.161 16:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:07.161 16:12:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:07.161 16:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:07.161 16:12:46 -- common/autotest_common.sh@10 -- # set +x 00:30:07.161 [2024-04-26 16:12:46.822440] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:07.161 16:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:07.161 16:12:46 -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:07.161 16:12:46 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:07.161 16:12:46 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:07.161 16:12:46 -- nvmf/common.sh@521 -- # config=() 00:30:07.161 16:12:46 -- target/dif.sh@82 -- # gen_fio_conf 00:30:07.161 16:12:46 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:07.161 16:12:46 -- target/dif.sh@54 -- # local file 00:30:07.161 16:12:46 -- nvmf/common.sh@521 -- # local subsystem config 00:30:07.161 16:12:46 -- target/dif.sh@56 -- # cat 00:30:07.161 16:12:46 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:07.161 16:12:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:07.161 16:12:46 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:30:07.161 16:12:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:07.161 { 00:30:07.161 "params": { 00:30:07.161 "name": "Nvme$subsystem", 00:30:07.161 "trtype": "$TEST_TRANSPORT", 00:30:07.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.161 "adrfam": "ipv4", 00:30:07.161 "trsvcid": "$NVMF_PORT", 00:30:07.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.161 "hdgst": ${hdgst:-false}, 00:30:07.161 "ddgst": ${ddgst:-false} 00:30:07.161 }, 00:30:07.161 "method": "bdev_nvme_attach_controller" 00:30:07.161 } 00:30:07.161 EOF 00:30:07.161 )") 00:30:07.161 16:12:46 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:07.161 16:12:46 -- common/autotest_common.sh@1325 -- # local sanitizers 00:30:07.161 16:12:46 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:07.161 16:12:46 -- common/autotest_common.sh@1327 -- # shift 00:30:07.161 16:12:46 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:30:07.161 16:12:46 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:07.161 16:12:46 -- nvmf/common.sh@543 -- # cat 00:30:07.161 16:12:46 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:07.161 16:12:46 -- target/dif.sh@72 -- # (( file <= files )) 00:30:07.161 16:12:46 -- common/autotest_common.sh@1331 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:07.161 16:12:46 -- common/autotest_common.sh@1331 -- # grep libasan 00:30:07.161 16:12:46 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:07.161 16:12:46 -- nvmf/common.sh@545 -- # jq . 00:30:07.161 16:12:46 -- nvmf/common.sh@546 -- # IFS=, 00:30:07.161 16:12:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:30:07.161 "params": { 00:30:07.161 "name": "Nvme0", 00:30:07.161 "trtype": "tcp", 00:30:07.161 "traddr": "10.0.0.2", 00:30:07.161 "adrfam": "ipv4", 00:30:07.161 "trsvcid": "4420", 00:30:07.161 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:07.161 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:07.161 "hdgst": true, 00:30:07.161 "ddgst": true 00:30:07.161 }, 00:30:07.161 "method": "bdev_nvme_attach_controller" 00:30:07.161 }' 00:30:07.455 16:12:46 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:07.455 16:12:46 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:07.455 16:12:46 -- common/autotest_common.sh@1333 -- # break 00:30:07.455 16:12:46 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:07.455 16:12:46 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:07.716 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:07.716 ... 00:30:07.716 fio-3.35 00:30:07.716 Starting 3 threads 00:30:07.716 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.916 00:30:19.916 filename0: (groupid=0, jobs=1): err= 0: pid=2631430: Fri Apr 26 16:12:58 2024 00:30:19.916 read: IOPS=222, BW=27.9MiB/s (29.2MB/s)(279MiB/10004msec) 00:30:19.916 slat (nsec): min=7445, max=45751, avg=16886.76, stdev=5590.69 00:30:19.916 clat (usec): min=6277, max=61398, avg=13437.11, stdev=7595.19 00:30:19.916 lat (usec): min=6287, max=61421, avg=13454.00, stdev=7595.92 00:30:19.916 clat percentiles (usec): 00:30:19.916 | 1.00th=[ 7701], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10814], 00:30:19.916 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12518], 60.00th=[12780], 00:30:19.916 | 70.00th=[13173], 80.00th=[13566], 90.00th=[14353], 95.00th=[15533], 00:30:19.916 | 99.00th=[57410], 99.50th=[58459], 99.90th=[60031], 99.95th=[61080], 00:30:19.916 | 99.99th=[61604] 00:30:19.916 bw ( KiB/s): min=20736, max=33024, per=33.59%, avg=28658.53, stdev=3425.55, samples=19 00:30:19.916 iops : min= 162, max= 258, avg=223.89, stdev=26.76, samples=19 00:30:19.916 lat (msec) : 10=12.96%, 20=84.08%, 50=0.13%, 100=2.83% 00:30:19.916 cpu : usr=96.85%, sys=2.78%, ctx=15, majf=0, minf=1632 00:30:19.916 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:19.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.916 issued rwts: total=2230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:19.916 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:19.916 filename0: (groupid=0, jobs=1): err= 0: pid=2631431: Fri Apr 26 16:12:58 2024 00:30:19.916 read: IOPS=189, BW=23.7MiB/s (24.9MB/s)(237MiB/10006msec) 00:30:19.916 slat (nsec): min=7596, max=49006, avg=24987.59, stdev=8610.05 00:30:19.916 clat (usec): min=7217, max=97994, avg=15775.26, stdev=10348.00 00:30:19.916 lat (usec): min=7239, max=98025, avg=15800.25, stdev=10348.26 
00:30:19.916 clat percentiles (usec): 00:30:19.916 | 1.00th=[ 9110], 5.00th=[10028], 10.00th=[10945], 20.00th=[12125], 00:30:19.916 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13566], 60.00th=[13960], 00:30:19.916 | 70.00th=[14353], 80.00th=[15008], 90.00th=[16319], 95.00th=[53740], 00:30:19.916 | 99.00th=[58983], 99.50th=[59507], 99.90th=[95945], 99.95th=[98042], 00:30:19.916 | 99.99th=[98042] 00:30:19.916 bw ( KiB/s): min=17920, max=30720, per=28.46%, avg=24281.60, stdev=3700.74, samples=20 00:30:19.916 iops : min= 140, max= 240, avg=189.70, stdev=28.91, samples=20 00:30:19.916 lat (msec) : 10=5.21%, 20=89.20%, 50=0.21%, 100=5.37% 00:30:19.916 cpu : usr=97.01%, sys=2.64%, ctx=15, majf=0, minf=1635 00:30:19.916 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:19.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.916 issued rwts: total=1899,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:19.916 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:19.916 filename0: (groupid=0, jobs=1): err= 0: pid=2631432: Fri Apr 26 16:12:58 2024 00:30:19.916 read: IOPS=255, BW=31.9MiB/s (33.5MB/s)(321MiB/10047msec) 00:30:19.916 slat (nsec): min=7100, max=81672, avg=17767.30, stdev=5930.09 00:30:19.916 clat (usec): min=5484, max=57561, avg=11702.60, stdev=3769.79 00:30:19.916 lat (usec): min=5495, max=57590, avg=11720.36, stdev=3770.47 00:30:19.916 clat percentiles (usec): 00:30:19.916 | 1.00th=[ 6194], 5.00th=[ 8029], 10.00th=[ 8717], 20.00th=[ 9503], 00:30:19.916 | 30.00th=[10683], 40.00th=[11469], 50.00th=[11863], 60.00th=[12256], 00:30:19.916 | 70.00th=[12649], 80.00th=[13042], 90.00th=[13698], 95.00th=[14222], 00:30:19.916 | 99.00th=[15664], 99.50th=[53216], 99.90th=[56886], 99.95th=[57410], 00:30:19.916 | 99.99th=[57410] 00:30:19.916 bw ( KiB/s): min=25344, max=39680, per=38.49%, avg=32832.00, stdev=3466.46, samples=20 00:30:19.916 iops : min= 198, max= 310, avg=256.50, stdev=27.08, samples=20 00:30:19.916 lat (msec) : 10=25.20%, 20=74.25%, 50=0.04%, 100=0.51% 00:30:19.916 cpu : usr=93.50%, sys=4.36%, ctx=975, majf=0, minf=1639 00:30:19.916 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:19.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.916 issued rwts: total=2567,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:19.916 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:19.916 00:30:19.916 Run status group 0 (all jobs): 00:30:19.917 READ: bw=83.3MiB/s (87.4MB/s), 23.7MiB/s-31.9MiB/s (24.9MB/s-33.5MB/s), io=837MiB (878MB), run=10004-10047msec 00:30:19.917 ----------------------------------------------------- 00:30:19.917 Suppressions used: 00:30:19.917 count bytes template 00:30:19.917 5 44 /usr/src/fio/parse.c 00:30:19.917 1 8 libtcmalloc_minimal.so 00:30:19.917 1 904 libcrypto.so 00:30:19.917 ----------------------------------------------------- 00:30:19.917 00:30:19.917 16:12:59 -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:19.917 16:12:59 -- target/dif.sh@43 -- # local sub 00:30:19.917 16:12:59 -- target/dif.sh@45 -- # for sub in "$@" 00:30:19.917 16:12:59 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:19.917 16:12:59 -- target/dif.sh@36 -- # local sub_id=0 00:30:19.917 16:12:59 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:19.917 
16:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.917 16:12:59 -- common/autotest_common.sh@10 -- # set +x 00:30:19.917 16:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.917 16:12:59 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:19.917 16:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.917 16:12:59 -- common/autotest_common.sh@10 -- # set +x 00:30:19.917 16:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.917 00:30:19.917 real 0m12.477s 00:30:19.917 user 0m36.142s 00:30:19.917 sys 0m1.570s 00:30:19.917 16:12:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:19.917 16:12:59 -- common/autotest_common.sh@10 -- # set +x 00:30:19.917 ************************************ 00:30:19.917 END TEST fio_dif_digest 00:30:19.917 ************************************ 00:30:19.917 16:12:59 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:19.917 16:12:59 -- target/dif.sh@147 -- # nvmftestfini 00:30:19.917 16:12:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:19.917 16:12:59 -- nvmf/common.sh@117 -- # sync 00:30:19.917 16:12:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:19.917 16:12:59 -- nvmf/common.sh@120 -- # set +e 00:30:19.917 16:12:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:19.917 16:12:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:19.917 rmmod nvme_tcp 00:30:19.917 rmmod nvme_fabrics 00:30:19.917 rmmod nvme_keyring 00:30:19.917 16:12:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:19.917 16:12:59 -- nvmf/common.sh@124 -- # set -e 00:30:19.917 16:12:59 -- nvmf/common.sh@125 -- # return 0 00:30:19.917 16:12:59 -- nvmf/common.sh@478 -- # '[' -n 2621180 ']' 00:30:19.917 16:12:59 -- nvmf/common.sh@479 -- # killprocess 2621180 00:30:19.917 16:12:59 -- common/autotest_common.sh@936 -- # '[' -z 2621180 ']' 00:30:19.917 16:12:59 -- common/autotest_common.sh@940 -- # kill -0 2621180 00:30:19.917 16:12:59 -- common/autotest_common.sh@941 -- # uname 00:30:19.917 16:12:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:19.917 16:12:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2621180 00:30:19.917 16:12:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:19.917 16:12:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:19.917 16:12:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2621180' 00:30:19.917 killing process with pid 2621180 00:30:19.917 16:12:59 -- common/autotest_common.sh@955 -- # kill 2621180 00:30:19.917 16:12:59 -- common/autotest_common.sh@960 -- # wait 2621180 00:30:21.292 16:13:00 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:30:21.293 16:13:00 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:23.826 Waiting for block devices as requested 00:30:23.826 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:23.826 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:23.826 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:23.826 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:23.826 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:23.826 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:23.826 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:23.826 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:24.084 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:24.084 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:24.084 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 
00:30:24.342 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:24.342 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:24.342 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:24.342 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:24.600 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:24.600 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:24.600 16:13:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:24.600 16:13:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:24.600 16:13:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:24.600 16:13:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:24.600 16:13:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.600 16:13:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:24.600 16:13:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.131 16:13:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:27.131 00:30:27.131 real 1m22.212s 00:30:27.131 user 7m25.776s 00:30:27.131 sys 0m19.285s 00:30:27.131 16:13:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:27.131 16:13:06 -- common/autotest_common.sh@10 -- # set +x 00:30:27.131 ************************************ 00:30:27.131 END TEST nvmf_dif 00:30:27.131 ************************************ 00:30:27.131 16:13:06 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:27.131 16:13:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:27.131 16:13:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:27.131 16:13:06 -- common/autotest_common.sh@10 -- # set +x 00:30:27.131 ************************************ 00:30:27.131 START TEST nvmf_abort_qd_sizes 00:30:27.131 ************************************ 00:30:27.131 16:13:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:27.131 * Looking for test storage... 
00:30:27.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:27.131 16:13:06 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.131 16:13:06 -- nvmf/common.sh@7 -- # uname -s 00:30:27.131 16:13:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.131 16:13:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.131 16:13:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.131 16:13:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.131 16:13:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.131 16:13:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.131 16:13:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.131 16:13:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.131 16:13:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.131 16:13:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.131 16:13:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:27.131 16:13:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:27.131 16:13:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.131 16:13:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.131 16:13:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.131 16:13:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.131 16:13:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.131 16:13:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.131 16:13:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.131 16:13:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.131 16:13:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.131 16:13:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.131 16:13:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.131 16:13:06 -- paths/export.sh@5 -- # export PATH 00:30:27.131 16:13:06 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.131 16:13:06 -- nvmf/common.sh@47 -- # : 0 00:30:27.131 16:13:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:27.131 16:13:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:27.131 16:13:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.131 16:13:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.131 16:13:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.131 16:13:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:27.131 16:13:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:27.131 16:13:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:27.131 16:13:06 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:27.131 16:13:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:27.131 16:13:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.131 16:13:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:27.131 16:13:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:27.131 16:13:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:27.131 16:13:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.131 16:13:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:27.131 16:13:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.131 16:13:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:30:27.131 16:13:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:30:27.131 16:13:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:27.131 16:13:06 -- common/autotest_common.sh@10 -- # set +x 00:30:32.436 16:13:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:32.436 16:13:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:32.436 16:13:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:32.436 16:13:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:32.436 16:13:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:32.436 16:13:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:32.436 16:13:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:32.436 16:13:11 -- nvmf/common.sh@295 -- # net_devs=() 00:30:32.436 16:13:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:32.436 16:13:11 -- nvmf/common.sh@296 -- # e810=() 00:30:32.436 16:13:11 -- nvmf/common.sh@296 -- # local -ga e810 00:30:32.436 16:13:11 -- nvmf/common.sh@297 -- # x722=() 00:30:32.436 16:13:11 -- nvmf/common.sh@297 -- # local -ga x722 00:30:32.436 16:13:11 -- nvmf/common.sh@298 -- # mlx=() 00:30:32.436 16:13:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:32.436 16:13:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:32.436 16:13:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:32.436 16:13:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:32.436 16:13:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:32.436 16:13:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:32.436 16:13:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:32.436 16:13:11 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:32.436 16:13:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:32.436 16:13:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:32.436 16:13:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:32.436 16:13:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:32.436 16:13:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:32.436 16:13:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:32.436 16:13:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:32.436 16:13:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:32.436 16:13:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:32.436 16:13:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:32.436 16:13:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:32.436 16:13:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:32.436 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:32.436 16:13:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:32.436 16:13:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:32.436 16:13:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.436 16:13:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.436 16:13:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:32.436 16:13:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:32.436 16:13:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:32.436 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:32.436 16:13:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:32.436 16:13:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:32.436 16:13:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.436 16:13:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.436 16:13:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:32.436 16:13:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:32.436 16:13:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:32.436 16:13:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:32.436 16:13:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:32.436 16:13:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.436 16:13:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:32.436 16:13:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.436 16:13:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:32.436 Found net devices under 0000:86:00.0: cvl_0_0 00:30:32.436 16:13:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.436 16:13:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:32.437 16:13:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.437 16:13:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:32.437 16:13:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.437 16:13:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:32.437 Found net devices under 0000:86:00.1: cvl_0_1 00:30:32.437 16:13:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.437 16:13:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:30:32.437 16:13:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:30:32.437 16:13:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:30:32.437 16:13:11 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:30:32.437 16:13:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:30:32.437 16:13:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:32.437 16:13:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:32.437 16:13:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:32.437 16:13:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:32.437 16:13:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:32.437 16:13:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:32.437 16:13:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:32.437 16:13:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:32.437 16:13:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:32.437 16:13:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:32.437 16:13:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:32.437 16:13:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:32.437 16:13:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:32.437 16:13:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:32.437 16:13:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:32.437 16:13:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:32.437 16:13:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:32.437 16:13:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:32.437 16:13:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:32.437 16:13:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:32.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:32.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:30:32.437 00:30:32.437 --- 10.0.0.2 ping statistics --- 00:30:32.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.437 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:30:32.437 16:13:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:32.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:32.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.379 ms 00:30:32.437 00:30:32.437 --- 10.0.0.1 ping statistics --- 00:30:32.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.437 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:30:32.437 16:13:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:32.437 16:13:11 -- nvmf/common.sh@411 -- # return 0 00:30:32.437 16:13:11 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:30:32.437 16:13:11 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:34.972 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:34.972 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:34.972 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:34.972 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:34.972 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:34.972 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:34.972 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:34.972 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:34.972 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:34.972 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:34.972 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:34.972 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:34.972 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:34.972 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:34.972 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:34.972 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:35.982 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:30:35.982 16:13:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:35.982 16:13:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:35.982 16:13:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:35.982 16:13:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:35.982 16:13:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:35.982 16:13:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:35.982 16:13:15 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:35.982 16:13:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:35.982 16:13:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:35.982 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:30:35.982 16:13:15 -- nvmf/common.sh@470 -- # nvmfpid=2639442 00:30:35.982 16:13:15 -- nvmf/common.sh@471 -- # waitforlisten 2639442 00:30:35.982 16:13:15 -- common/autotest_common.sh@817 -- # '[' -z 2639442 ']' 00:30:35.983 16:13:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.983 16:13:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:35.983 16:13:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.983 16:13:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:35.983 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:30:35.983 16:13:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:35.983 [2024-04-26 16:13:15.570907] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
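For reference, the test-network plumbing that nvmf_tcp_init traces above reduces to the commands below. This is only a condensed sketch: the cvl_0_0/cvl_0_1 names are simply what the ice driver exposed on this particular host, and paths are shortened relative to the spdk checkout, so adjust both for your own setup.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port moves into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP (port 4420) on the initiator-side interface
  ping -c 1 10.0.0.2                                      # initiator -> target, as verified above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target namespace -> initiator
  # the target application is then started inside the namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf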
00:30:35.983 [2024-04-26 16:13:15.570992] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.983 EAL: No free 2048 kB hugepages reported on node 1 00:30:36.252 [2024-04-26 16:13:15.681091] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:36.252 [2024-04-26 16:13:15.900108] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:36.252 [2024-04-26 16:13:15.900153] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:36.252 [2024-04-26 16:13:15.900163] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:36.252 [2024-04-26 16:13:15.900175] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:36.252 [2024-04-26 16:13:15.900182] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:36.252 [2024-04-26 16:13:15.900248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.252 [2024-04-26 16:13:15.900326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:36.252 [2024-04-26 16:13:15.900383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.252 [2024-04-26 16:13:15.900393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:36.820 16:13:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:36.820 16:13:16 -- common/autotest_common.sh@850 -- # return 0 00:30:36.820 16:13:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:36.820 16:13:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:36.820 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:30:36.820 16:13:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:36.821 16:13:16 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:36.821 16:13:16 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:36.821 16:13:16 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:36.821 16:13:16 -- scripts/common.sh@309 -- # local bdf bdfs 00:30:36.821 16:13:16 -- scripts/common.sh@310 -- # local nvmes 00:30:36.821 16:13:16 -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:30:36.821 16:13:16 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:30:36.821 16:13:16 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:36.821 16:13:16 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:30:36.821 16:13:16 -- scripts/common.sh@320 -- # uname -s 00:30:36.821 16:13:16 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:36.821 16:13:16 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:36.821 16:13:16 -- scripts/common.sh@325 -- # (( 1 )) 00:30:36.821 16:13:16 -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:30:36.821 16:13:16 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:30:36.821 16:13:16 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:30:36.821 16:13:16 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:36.821 16:13:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:36.821 16:13:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:36.821 16:13:16 -- 
common/autotest_common.sh@10 -- # set +x 00:30:37.080 ************************************ 00:30:37.080 START TEST spdk_target_abort 00:30:37.080 ************************************ 00:30:37.080 16:13:16 -- common/autotest_common.sh@1111 -- # spdk_target 00:30:37.080 16:13:16 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:37.080 16:13:16 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:30:37.080 16:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:37.080 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:30:40.372 spdk_targetn1 00:30:40.372 16:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:40.372 16:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.372 16:13:19 -- common/autotest_common.sh@10 -- # set +x 00:30:40.372 [2024-04-26 16:13:19.394058] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.372 16:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:40.372 16:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.372 16:13:19 -- common/autotest_common.sh@10 -- # set +x 00:30:40.372 16:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:40.372 16:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.372 16:13:19 -- common/autotest_common.sh@10 -- # set +x 00:30:40.372 16:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:30:40.372 16:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:40.372 16:13:19 -- common/autotest_common.sh@10 -- # set +x 00:30:40.372 [2024-04-26 16:13:19.454315] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.372 16:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:40.372 16:13:19 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:40.372 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.663 Initializing NVMe Controllers 00:30:43.663 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:43.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:43.663 Initialization complete. Launching workers. 00:30:43.663 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5766, failed: 0 00:30:43.663 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1416, failed to submit 4350 00:30:43.663 success 892, unsuccess 524, failed 0 00:30:43.663 16:13:22 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:43.663 16:13:22 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:43.663 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.955 Initializing NVMe Controllers 00:30:46.955 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:46.955 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:46.955 Initialization complete. Launching workers. 00:30:46.955 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8597, failed: 0 00:30:46.955 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1266, failed to submit 7331 00:30:46.955 success 260, unsuccess 1006, failed 0 00:30:46.955 16:13:26 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:46.955 16:13:26 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:46.955 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.249 Initializing NVMe Controllers 00:30:50.249 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:50.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:50.249 Initialization complete. Launching workers. 
00:30:50.249 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31541, failed: 0 00:30:50.249 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2779, failed to submit 28762 00:30:50.249 success 526, unsuccess 2253, failed 0 00:30:50.249 16:13:29 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:50.249 16:13:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:50.249 16:13:29 -- common/autotest_common.sh@10 -- # set +x 00:30:50.249 16:13:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:50.249 16:13:29 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:50.249 16:13:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:50.249 16:13:29 -- common/autotest_common.sh@10 -- # set +x 00:30:51.186 16:13:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:51.186 16:13:30 -- target/abort_qd_sizes.sh@61 -- # killprocess 2639442 00:30:51.186 16:13:30 -- common/autotest_common.sh@936 -- # '[' -z 2639442 ']' 00:30:51.186 16:13:30 -- common/autotest_common.sh@940 -- # kill -0 2639442 00:30:51.186 16:13:30 -- common/autotest_common.sh@941 -- # uname 00:30:51.186 16:13:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:51.186 16:13:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2639442 00:30:51.186 16:13:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:51.186 16:13:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:51.186 16:13:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2639442' 00:30:51.186 killing process with pid 2639442 00:30:51.186 16:13:30 -- common/autotest_common.sh@955 -- # kill 2639442 00:30:51.186 16:13:30 -- common/autotest_common.sh@960 -- # wait 2639442 00:30:52.566 00:30:52.566 real 0m15.334s 00:30:52.566 user 0m59.854s 00:30:52.566 sys 0m2.227s 00:30:52.566 16:13:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:52.566 16:13:31 -- common/autotest_common.sh@10 -- # set +x 00:30:52.566 ************************************ 00:30:52.566 END TEST spdk_target_abort 00:30:52.566 ************************************ 00:30:52.566 16:13:31 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:52.566 16:13:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:52.566 16:13:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:52.566 16:13:31 -- common/autotest_common.sh@10 -- # set +x 00:30:52.566 ************************************ 00:30:52.566 START TEST kernel_target_abort 00:30:52.566 ************************************ 00:30:52.566 16:13:32 -- common/autotest_common.sh@1111 -- # kernel_target 00:30:52.566 16:13:32 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:52.566 16:13:32 -- nvmf/common.sh@717 -- # local ip 00:30:52.566 16:13:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:52.566 16:13:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:52.566 16:13:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:52.566 16:13:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:52.566 16:13:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:52.566 16:13:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:52.566 16:13:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:52.566 16:13:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:52.566 16:13:32 -- nvmf/common.sh@731 -- # 
echo 10.0.0.1 00:30:52.566 16:13:32 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:52.566 16:13:32 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:52.566 16:13:32 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:30:52.566 16:13:32 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:52.566 16:13:32 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:52.566 16:13:32 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:52.566 16:13:32 -- nvmf/common.sh@628 -- # local block nvme 00:30:52.566 16:13:32 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:30:52.566 16:13:32 -- nvmf/common.sh@631 -- # modprobe nvmet 00:30:52.566 16:13:32 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:52.566 16:13:32 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:55.104 Waiting for block devices as requested 00:30:55.104 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:55.104 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:55.104 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:55.104 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:55.104 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:55.104 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:55.364 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:55.364 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:55.364 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:55.364 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:55.623 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:55.623 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:55.623 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:55.883 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:55.883 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:55.883 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:55.883 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:56.823 16:13:36 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:30:56.823 16:13:36 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:56.823 16:13:36 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:30:56.823 16:13:36 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:30:56.823 16:13:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:56.823 16:13:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:56.823 16:13:36 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:30:56.823 16:13:36 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:56.823 16:13:36 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:56.823 No valid GPT data, bailing 00:30:56.823 16:13:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:56.823 16:13:36 -- scripts/common.sh@391 -- # pt= 00:30:56.823 16:13:36 -- scripts/common.sh@392 -- # return 1 00:30:56.823 16:13:36 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:30:56.823 16:13:36 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:30:56.823 16:13:36 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:56.823 16:13:36 -- nvmf/common.sh@648 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:56.823 16:13:36 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:56.823 16:13:36 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:56.823 16:13:36 -- nvmf/common.sh@656 -- # echo 1 00:30:56.823 16:13:36 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:30:56.823 16:13:36 -- nvmf/common.sh@658 -- # echo 1 00:30:56.823 16:13:36 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:30:56.823 16:13:36 -- nvmf/common.sh@661 -- # echo tcp 00:30:56.823 16:13:36 -- nvmf/common.sh@662 -- # echo 4420 00:30:56.823 16:13:36 -- nvmf/common.sh@663 -- # echo ipv4 00:30:56.823 16:13:36 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:56.823 16:13:36 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:30:56.823 00:30:56.823 Discovery Log Number of Records 2, Generation counter 2 00:30:56.823 =====Discovery Log Entry 0====== 00:30:56.823 trtype: tcp 00:30:56.823 adrfam: ipv4 00:30:56.823 subtype: current discovery subsystem 00:30:56.823 treq: not specified, sq flow control disable supported 00:30:56.823 portid: 1 00:30:56.823 trsvcid: 4420 00:30:56.823 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:56.823 traddr: 10.0.0.1 00:30:56.823 eflags: none 00:30:56.823 sectype: none 00:30:56.823 =====Discovery Log Entry 1====== 00:30:56.823 trtype: tcp 00:30:56.823 adrfam: ipv4 00:30:56.823 subtype: nvme subsystem 00:30:56.823 treq: not specified, sq flow control disable supported 00:30:56.823 portid: 1 00:30:56.823 trsvcid: 4420 00:30:56.823 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:56.823 traddr: 10.0.0.1 00:30:56.823 eflags: none 00:30:56.823 sectype: none 00:30:56.823 16:13:36 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:56.823 16:13:36 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:56.823 16:13:36 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:56.823 16:13:36 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:56.823 16:13:36 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:56.823 16:13:36 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:56.823 16:13:36 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:56.823 16:13:36 -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:56.823 16:13:36 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:56.823 16:13:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:56.823 16:13:36 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:56.823 16:13:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:56.823 16:13:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:56.823 16:13:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:56.823 16:13:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:56.823 16:13:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:56.823 16:13:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:56.823 16:13:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr 
trsvcid subnqn 00:30:56.823 16:13:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:56.823 16:13:36 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:56.823 16:13:36 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:56.823 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.145 Initializing NVMe Controllers 00:31:00.145 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:00.145 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:00.145 Initialization complete. Launching workers. 00:31:00.145 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36837, failed: 0 00:31:00.145 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36837, failed to submit 0 00:31:00.145 success 0, unsuccess 36837, failed 0 00:31:00.145 16:13:39 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:00.145 16:13:39 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:00.145 EAL: No free 2048 kB hugepages reported on node 1 00:31:03.436 Initializing NVMe Controllers 00:31:03.436 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:03.436 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:03.436 Initialization complete. Launching workers. 00:31:03.436 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 73566, failed: 0 00:31:03.436 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18578, failed to submit 54988 00:31:03.436 success 0, unsuccess 18578, failed 0 00:31:03.436 16:13:42 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:03.436 16:13:42 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:03.436 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.726 Initializing NVMe Controllers 00:31:06.726 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:06.726 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:06.726 Initialization complete. Launching workers. 
00:31:06.726 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72373, failed: 0 00:31:06.726 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18090, failed to submit 54283 00:31:06.726 success 0, unsuccess 18090, failed 0 00:31:06.726 16:13:45 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:06.726 16:13:45 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:06.726 16:13:45 -- nvmf/common.sh@675 -- # echo 0 00:31:06.726 16:13:45 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:06.726 16:13:45 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:06.726 16:13:45 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:06.726 16:13:45 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:06.726 16:13:45 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:31:06.726 16:13:45 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:31:06.726 16:13:45 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:09.261 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:09.261 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:09.261 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:09.261 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:09.261 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:09.261 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:09.261 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:09.261 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:09.261 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:09.261 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:09.261 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:09.261 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:09.261 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:09.261 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:09.261 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:09.261 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:09.829 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:31:09.829 00:31:09.829 real 0m17.478s 00:31:09.829 user 0m5.639s 00:31:09.829 sys 0m6.042s 00:31:09.829 16:13:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:09.829 16:13:49 -- common/autotest_common.sh@10 -- # set +x 00:31:09.829 ************************************ 00:31:09.829 END TEST kernel_target_abort 00:31:09.829 ************************************ 00:31:10.089 16:13:49 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:10.089 16:13:49 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:10.089 16:13:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:10.089 16:13:49 -- nvmf/common.sh@117 -- # sync 00:31:10.089 16:13:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:10.089 16:13:49 -- nvmf/common.sh@120 -- # set +e 00:31:10.089 16:13:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:10.089 16:13:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:10.089 rmmod nvme_tcp 00:31:10.089 rmmod nvme_fabrics 00:31:10.089 rmmod nvme_keyring 00:31:10.089 16:13:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:10.089 16:13:49 -- nvmf/common.sh@124 -- # set -e 00:31:10.089 16:13:49 -- nvmf/common.sh@125 -- # return 0 00:31:10.089 16:13:49 -- nvmf/common.sh@478 -- # '[' -n 2639442 ']' 
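The kernel_target_abort run above builds its target directly out of the kernel nvmet configfs tree rather than through an SPDK application. The xtrace only records the mkdir/echo/ln/rmdir commands, not their redirection targets, so the attribute files named below are the standard nvmet configfs ones and are an assumption about exactly where configure_kernel_target writes; the block device, NQN and addresses are the ones the log reports.

  modprobe nvmet                      # the log probes nvmet only; nvmet_tcp also ends up loaded before the TCP port is used
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  # (the trace's bare "echo SPDK-nqn..." sets a subsystem identity attribute; its target file is not visible in the xtrace)
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420                # should list nqn.2016-06.io.spdk:testnqn, as above
  # teardown, mirroring clean_kernel_target:
  echo 0 > "$subsys/namespaces/1/enable"
  rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$subsys/namespaces/1" "$port" "$subsys"
  modprobe -r nvmet_tcp nvmet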
00:31:10.089 16:13:49 -- nvmf/common.sh@479 -- # killprocess 2639442 00:31:10.089 16:13:49 -- common/autotest_common.sh@936 -- # '[' -z 2639442 ']' 00:31:10.089 16:13:49 -- common/autotest_common.sh@940 -- # kill -0 2639442 00:31:10.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2639442) - No such process 00:31:10.089 16:13:49 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2639442 is not found' 00:31:10.089 Process with pid 2639442 is not found 00:31:10.089 16:13:49 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:31:10.089 16:13:49 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:12.626 Waiting for block devices as requested 00:31:12.626 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:31:12.626 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:12.884 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:12.884 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:12.884 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:13.143 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:13.143 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:13.143 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:13.143 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:13.403 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:13.403 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:13.403 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:13.403 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:13.661 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:13.661 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:13.661 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:13.661 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:13.920 16:13:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:13.920 16:13:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:13.920 16:13:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:13.920 16:13:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:13.921 16:13:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.921 16:13:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:13.921 16:13:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.827 16:13:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:15.827 00:31:15.827 real 0m49.014s 00:31:15.827 user 1m9.439s 00:31:15.827 sys 0m16.376s 00:31:15.827 16:13:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:15.827 16:13:55 -- common/autotest_common.sh@10 -- # set +x 00:31:15.827 ************************************ 00:31:15.827 END TEST nvmf_abort_qd_sizes 00:31:15.827 ************************************ 00:31:16.102 16:13:55 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:16.102 16:13:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:16.102 16:13:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:16.102 16:13:55 -- common/autotest_common.sh@10 -- # set +x 00:31:16.102 ************************************ 00:31:16.102 START TEST keyring_file 00:31:16.102 ************************************ 00:31:16.102 16:13:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:16.102 * Looking for test storage... 
00:31:16.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:16.102 16:13:55 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:16.102 16:13:55 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:16.102 16:13:55 -- nvmf/common.sh@7 -- # uname -s 00:31:16.102 16:13:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:16.102 16:13:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:16.102 16:13:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:16.102 16:13:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:16.102 16:13:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:16.102 16:13:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:16.102 16:13:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:16.102 16:13:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:16.102 16:13:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:16.102 16:13:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:16.102 16:13:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:16.102 16:13:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:16.102 16:13:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:16.102 16:13:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:16.102 16:13:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:16.102 16:13:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:16.102 16:13:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:16.102 16:13:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:16.102 16:13:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:16.102 16:13:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:16.102 16:13:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.102 16:13:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.102 16:13:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.102 16:13:55 -- paths/export.sh@5 -- # export PATH 00:31:16.102 16:13:55 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.102 16:13:55 -- nvmf/common.sh@47 -- # : 0 00:31:16.102 16:13:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:16.441 16:13:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:16.441 16:13:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:16.441 16:13:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:16.441 16:13:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:16.441 16:13:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:16.441 16:13:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:16.441 16:13:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:16.441 16:13:55 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:16.441 16:13:55 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:16.441 16:13:55 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:16.441 16:13:55 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:16.441 16:13:55 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:16.441 16:13:55 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:16.441 16:13:55 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:16.441 16:13:55 -- keyring/common.sh@15 -- # local name key digest path 00:31:16.441 16:13:55 -- keyring/common.sh@17 -- # name=key0 00:31:16.441 16:13:55 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:16.441 16:13:55 -- keyring/common.sh@17 -- # digest=0 00:31:16.441 16:13:55 -- keyring/common.sh@18 -- # mktemp 00:31:16.441 16:13:55 -- keyring/common.sh@18 -- # path=/tmp/tmp.Yi1CAc91oh 00:31:16.441 16:13:55 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:16.441 16:13:55 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:16.441 16:13:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:31:16.441 16:13:55 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:31:16.441 16:13:55 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:31:16.441 16:13:55 -- nvmf/common.sh@693 -- # digest=0 00:31:16.441 16:13:55 -- nvmf/common.sh@694 -- # python - 00:31:16.441 16:13:55 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Yi1CAc91oh 00:31:16.441 16:13:55 -- keyring/common.sh@23 -- # echo /tmp/tmp.Yi1CAc91oh 00:31:16.441 16:13:55 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Yi1CAc91oh 00:31:16.441 16:13:55 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:16.441 16:13:55 -- keyring/common.sh@15 -- # local name key digest path 00:31:16.441 16:13:55 -- keyring/common.sh@17 -- # name=key1 00:31:16.441 16:13:55 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:16.441 16:13:55 -- keyring/common.sh@17 -- # digest=0 00:31:16.441 16:13:55 -- keyring/common.sh@18 -- # mktemp 00:31:16.441 16:13:55 -- keyring/common.sh@18 -- # path=/tmp/tmp.Gvwb4qp0Mi 00:31:16.441 16:13:55 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:16.441 16:13:55 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:31:16.441 16:13:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:31:16.441 16:13:55 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:31:16.441 16:13:55 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:31:16.441 16:13:55 -- nvmf/common.sh@693 -- # digest=0 00:31:16.441 16:13:55 -- nvmf/common.sh@694 -- # python - 00:31:16.441 16:13:55 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Gvwb4qp0Mi 00:31:16.441 16:13:55 -- keyring/common.sh@23 -- # echo /tmp/tmp.Gvwb4qp0Mi 00:31:16.441 16:13:55 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Gvwb4qp0Mi 00:31:16.441 16:13:55 -- keyring/file.sh@30 -- # tgtpid=2648684 00:31:16.441 16:13:55 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:16.441 16:13:55 -- keyring/file.sh@32 -- # waitforlisten 2648684 00:31:16.441 16:13:55 -- common/autotest_common.sh@817 -- # '[' -z 2648684 ']' 00:31:16.441 16:13:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.441 16:13:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:16.441 16:13:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:16.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:16.441 16:13:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:16.441 16:13:55 -- common/autotest_common.sh@10 -- # set +x 00:31:16.441 [2024-04-26 16:13:55.968833] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:31:16.441 [2024-04-26 16:13:55.968918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2648684 ] 00:31:16.441 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.441 [2024-04-26 16:13:56.067930] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.721 [2024-04-26 16:13:56.277605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.660 16:13:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:17.660 16:13:57 -- common/autotest_common.sh@850 -- # return 0 00:31:17.660 16:13:57 -- keyring/file.sh@33 -- # rpc_cmd 00:31:17.660 16:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:17.660 16:13:57 -- common/autotest_common.sh@10 -- # set +x 00:31:17.660 [2024-04-26 16:13:57.212706] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:17.660 null0 00:31:17.660 [2024-04-26 16:13:57.244772] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:17.660 [2024-04-26 16:13:57.245160] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:17.660 [2024-04-26 16:13:57.252826] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:17.660 16:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:17.660 16:13:57 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:17.660 16:13:57 -- common/autotest_common.sh@638 -- # local es=0 00:31:17.660 16:13:57 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:17.660 16:13:57 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:31:17.660 16:13:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:17.660 16:13:57 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:31:17.660 16:13:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:17.660 16:13:57 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:17.660 16:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:17.660 16:13:57 -- common/autotest_common.sh@10 -- # set +x 00:31:17.660 [2024-04-26 16:13:57.260786] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:31:17.660 { 00:31:17.660 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:17.660 "secure_channel": false, 00:31:17.660 "listen_address": { 00:31:17.660 "trtype": "tcp", 00:31:17.660 "traddr": "127.0.0.1", 00:31:17.660 "trsvcid": "4420" 00:31:17.660 }, 00:31:17.660 "method": "nvmf_subsystem_add_listener", 00:31:17.660 "req_id": 1 00:31:17.660 } 00:31:17.660 Got JSON-RPC error response 00:31:17.660 response: 00:31:17.660 { 00:31:17.660 "code": -32602, 00:31:17.660 "message": "Invalid parameters" 00:31:17.660 } 00:31:17.660 16:13:57 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:31:17.660 16:13:57 -- common/autotest_common.sh@641 -- # es=1 00:31:17.660 16:13:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:17.660 16:13:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:17.660 16:13:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:17.660 16:13:57 -- keyring/file.sh@46 -- # bperfpid=2648924 00:31:17.660 16:13:57 -- keyring/file.sh@48 -- # waitforlisten 2648924 /var/tmp/bperf.sock 00:31:17.660 16:13:57 -- common/autotest_common.sh@817 -- # '[' -z 2648924 ']' 00:31:17.660 16:13:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:17.660 16:13:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:17.660 16:13:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:17.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:17.660 16:13:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:17.660 16:13:57 -- common/autotest_common.sh@10 -- # set +x 00:31:17.660 16:13:57 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:17.660 [2024-04-26 16:13:57.335110] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:31:17.660 [2024-04-26 16:13:57.335196] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2648924 ] 00:31:17.920 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.920 [2024-04-26 16:13:57.438777] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.179 [2024-04-26 16:13:57.661557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:18.439 16:13:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:18.439 16:13:58 -- common/autotest_common.sh@850 -- # return 0 00:31:18.439 16:13:58 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Yi1CAc91oh 00:31:18.439 16:13:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Yi1CAc91oh 00:31:18.698 16:13:58 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Gvwb4qp0Mi 00:31:18.698 16:13:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Gvwb4qp0Mi 00:31:18.958 16:13:58 -- keyring/file.sh@51 -- # get_key key0 00:31:18.958 16:13:58 -- keyring/file.sh@51 -- # jq -r .path 00:31:18.958 16:13:58 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:18.958 16:13:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:18.958 16:13:58 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:18.958 16:13:58 -- keyring/file.sh@51 -- # [[ /tmp/tmp.Yi1CAc91oh == \/\t\m\p\/\t\m\p\.\Y\i\1\C\A\c\9\1\o\h ]] 00:31:18.958 16:13:58 -- keyring/file.sh@52 -- # get_key key1 00:31:18.958 16:13:58 -- keyring/file.sh@52 -- # jq -r .path 00:31:18.958 16:13:58 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:18.958 16:13:58 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:18.958 16:13:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:19.217 16:13:58 -- keyring/file.sh@52 -- # [[ /tmp/tmp.Gvwb4qp0Mi == \/\t\m\p\/\t\m\p\.\G\v\w\b\4\q\p\0\M\i ]] 00:31:19.217 16:13:58 -- keyring/file.sh@53 -- # get_refcnt key0 00:31:19.217 16:13:58 -- keyring/common.sh@12 -- # get_key key0 00:31:19.217 16:13:58 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:19.217 16:13:58 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:19.217 16:13:58 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:19.217 16:13:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:19.477 16:13:58 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:31:19.477 16:13:58 -- keyring/file.sh@54 -- # get_refcnt key1 00:31:19.477 16:13:58 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:19.477 16:13:58 -- keyring/common.sh@12 -- # get_key key1 00:31:19.477 16:13:58 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:19.477 16:13:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:19.477 16:13:58 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:19.477 16:13:59 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:31:19.477 
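What the keyring setup above amounts to, with the long rpc.py path shortened to scripts/rpc.py: two files holding PSKs in NVMe TLS interchange form (the NVMeTLSkey-1:... lines produced by the format_interchange_psk helper) are registered with the bdevperf application as named keys over its RPC socket and then inspected. A condensed sketch of the same flow:

  chmod 0600 /tmp/tmp.Yi1CAc91oh /tmp/tmp.Gvwb4qp0Mi      # the test makes the key files private before registering them
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Yi1CAc91oh
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Gvwb4qp0Mi
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
      | jq -r '.[] | select(.name == "key0") | .path, .refcnt'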
16:13:59 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:19.477 16:13:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:19.736 [2024-04-26 16:13:59.262359] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:19.736 nvme0n1 00:31:19.736 16:13:59 -- keyring/file.sh@59 -- # get_refcnt key0 00:31:19.736 16:13:59 -- keyring/common.sh@12 -- # get_key key0 00:31:19.736 16:13:59 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:19.736 16:13:59 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:19.736 16:13:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:19.736 16:13:59 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:19.995 16:13:59 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:31:19.995 16:13:59 -- keyring/file.sh@60 -- # get_refcnt key1 00:31:19.995 16:13:59 -- keyring/common.sh@12 -- # get_key key1 00:31:19.995 16:13:59 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:19.995 16:13:59 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:19.995 16:13:59 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:19.995 16:13:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:20.254 16:13:59 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:31:20.254 16:13:59 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:20.254 Running I/O for 1 seconds... 
00:31:21.191 00:31:21.191 Latency(us) 00:31:21.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:21.191 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:31:21.191 nvme0n1 : 1.01 5872.33 22.94 0.00 0.00 21662.25 8833.11 37384.01 00:31:21.191 =================================================================================================================== 00:31:21.191 Total : 5872.33 22.94 0.00 0.00 21662.25 8833.11 37384.01 00:31:21.191 0 00:31:21.191 16:14:00 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:21.191 16:14:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:21.450 16:14:01 -- keyring/file.sh@65 -- # get_refcnt key0 00:31:21.450 16:14:01 -- keyring/common.sh@12 -- # get_key key0 00:31:21.450 16:14:01 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:21.450 16:14:01 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:21.450 16:14:01 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:21.450 16:14:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:21.709 16:14:01 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:31:21.709 16:14:01 -- keyring/file.sh@66 -- # get_refcnt key1 00:31:21.709 16:14:01 -- keyring/common.sh@12 -- # get_key key1 00:31:21.709 16:14:01 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:21.709 16:14:01 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:21.709 16:14:01 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:21.709 16:14:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:21.969 16:14:01 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:31:21.969 16:14:01 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:21.969 16:14:01 -- common/autotest_common.sh@638 -- # local es=0 00:31:21.969 16:14:01 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:21.969 16:14:01 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:31:21.969 16:14:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:21.969 16:14:01 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:31:21.969 16:14:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:21.969 16:14:01 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:21.969 16:14:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:21.969 [2024-04-26 16:14:01.564778] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:21.969 [2024-04-26 16:14:01.565141] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009240 (107): Transport endpoint is not connected 00:31:21.969 [2024-04-26 16:14:01.566124] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009240 (9): Bad file descriptor 00:31:21.969 [2024-04-26 16:14:01.567120] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:21.969 [2024-04-26 16:14:01.567137] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:21.969 [2024-04-26 16:14:01.567147] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:21.969 request: 00:31:21.969 { 00:31:21.969 "name": "nvme0", 00:31:21.969 "trtype": "tcp", 00:31:21.969 "traddr": "127.0.0.1", 00:31:21.969 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:21.969 "adrfam": "ipv4", 00:31:21.969 "trsvcid": "4420", 00:31:21.969 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:21.969 "psk": "key1", 00:31:21.969 "method": "bdev_nvme_attach_controller", 00:31:21.969 "req_id": 1 00:31:21.969 } 00:31:21.969 Got JSON-RPC error response 00:31:21.969 response: 00:31:21.969 { 00:31:21.969 "code": -32602, 00:31:21.969 "message": "Invalid parameters" 00:31:21.969 } 00:31:21.969 16:14:01 -- common/autotest_common.sh@641 -- # es=1 00:31:21.969 16:14:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:21.969 16:14:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:21.969 16:14:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:21.969 16:14:01 -- keyring/file.sh@71 -- # get_refcnt key0 00:31:21.969 16:14:01 -- keyring/common.sh@12 -- # get_key key0 00:31:21.969 16:14:01 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:21.969 16:14:01 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:21.969 16:14:01 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:21.969 16:14:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:22.229 16:14:01 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:31:22.229 16:14:01 -- keyring/file.sh@72 -- # get_refcnt key1 00:31:22.229 16:14:01 -- keyring/common.sh@12 -- # get_key key1 00:31:22.229 16:14:01 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:22.229 16:14:01 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:22.229 16:14:01 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:22.229 16:14:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:22.489 16:14:01 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:31:22.489 16:14:01 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:31:22.489 16:14:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:22.489 16:14:02 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:31:22.489 16:14:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:31:22.748 16:14:02 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:31:22.748 16:14:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:22.748 16:14:02 -- keyring/file.sh@77 -- # jq length 00:31:23.008 
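The attach with --psk key1 above is a negative test: the NOT wrapper from autotest_common.sh expects the RPC to fail, and the trace shows the controller never comes up (transport errors followed by a -32602 "Invalid parameters" JSON-RPC response), after which both keys are removed again. Without the wrapper, the same expectation can be written directly against rpc.py's exit status; a sketch using the exact attach arguments from the trace:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # The attach is required to fail; unexpected success is itself the test error
  if "$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
      echo "attach with key1 unexpectedly succeeded" >&2
      exit 1
  fi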
16:14:02 -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:31:23.008 16:14:02 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.Yi1CAc91oh 00:31:23.008 16:14:02 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Yi1CAc91oh 00:31:23.008 16:14:02 -- common/autotest_common.sh@638 -- # local es=0 00:31:23.008 16:14:02 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Yi1CAc91oh 00:31:23.008 16:14:02 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:31:23.008 16:14:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:23.008 16:14:02 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:31:23.008 16:14:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:23.008 16:14:02 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Yi1CAc91oh 00:31:23.008 16:14:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Yi1CAc91oh 00:31:23.008 [2024-04-26 16:14:02.615478] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Yi1CAc91oh': 0100660 00:31:23.008 [2024-04-26 16:14:02.615515] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:31:23.008 request: 00:31:23.008 { 00:31:23.008 "name": "key0", 00:31:23.008 "path": "/tmp/tmp.Yi1CAc91oh", 00:31:23.008 "method": "keyring_file_add_key", 00:31:23.008 "req_id": 1 00:31:23.008 } 00:31:23.008 Got JSON-RPC error response 00:31:23.008 response: 00:31:23.008 { 00:31:23.008 "code": -1, 00:31:23.008 "message": "Operation not permitted" 00:31:23.008 } 00:31:23.008 16:14:02 -- common/autotest_common.sh@641 -- # es=1 00:31:23.008 16:14:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:23.008 16:14:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:23.008 16:14:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:23.008 16:14:02 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.Yi1CAc91oh 00:31:23.008 16:14:02 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Yi1CAc91oh 00:31:23.008 16:14:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Yi1CAc91oh 00:31:23.268 16:14:02 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.Yi1CAc91oh 00:31:23.268 16:14:02 -- keyring/file.sh@88 -- # get_refcnt key0 00:31:23.268 16:14:02 -- keyring/common.sh@12 -- # get_key key0 00:31:23.268 16:14:02 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:23.268 16:14:02 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:23.268 16:14:02 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:23.268 16:14:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:23.527 16:14:02 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:31:23.527 16:14:02 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:23.527 16:14:02 -- common/autotest_common.sh@638 -- # local es=0 00:31:23.527 16:14:02 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:23.527 16:14:02 
-- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:31:23.527 16:14:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:23.527 16:14:02 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:31:23.527 16:14:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:23.527 16:14:02 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:23.527 16:14:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:23.527 [2024-04-26 16:14:03.132887] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Yi1CAc91oh': No such file or directory 00:31:23.527 [2024-04-26 16:14:03.132920] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:31:23.527 [2024-04-26 16:14:03.132944] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:31:23.527 [2024-04-26 16:14:03.132954] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:23.527 [2024-04-26 16:14:03.132964] bdev_nvme.c:6208:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:31:23.527 request: 00:31:23.527 { 00:31:23.527 "name": "nvme0", 00:31:23.527 "trtype": "tcp", 00:31:23.527 "traddr": "127.0.0.1", 00:31:23.527 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:23.527 "adrfam": "ipv4", 00:31:23.527 "trsvcid": "4420", 00:31:23.527 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:23.527 "psk": "key0", 00:31:23.527 "method": "bdev_nvme_attach_controller", 00:31:23.527 "req_id": 1 00:31:23.527 } 00:31:23.527 Got JSON-RPC error response 00:31:23.527 response: 00:31:23.527 { 00:31:23.527 "code": -19, 00:31:23.527 "message": "No such device" 00:31:23.527 } 00:31:23.527 16:14:03 -- common/autotest_common.sh@641 -- # es=1 00:31:23.527 16:14:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:23.527 16:14:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:23.527 16:14:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:23.527 16:14:03 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:31:23.527 16:14:03 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:23.795 16:14:03 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:23.795 16:14:03 -- keyring/common.sh@15 -- # local name key digest path 00:31:23.795 16:14:03 -- keyring/common.sh@17 -- # name=key0 00:31:23.795 16:14:03 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:23.795 16:14:03 -- keyring/common.sh@17 -- # digest=0 00:31:23.795 16:14:03 -- keyring/common.sh@18 -- # mktemp 00:31:23.795 16:14:03 -- keyring/common.sh@18 -- # path=/tmp/tmp.X58vfgNjQU 00:31:23.795 16:14:03 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:23.795 16:14:03 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:23.795 16:14:03 -- nvmf/common.sh@691 -- # local prefix key digest 00:31:23.795 16:14:03 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:31:23.795 16:14:03 -- 
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:31:23.795 16:14:03 -- nvmf/common.sh@693 -- # digest=0 00:31:23.795 16:14:03 -- nvmf/common.sh@694 -- # python - 00:31:23.795 16:14:03 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.X58vfgNjQU 00:31:23.795 16:14:03 -- keyring/common.sh@23 -- # echo /tmp/tmp.X58vfgNjQU 00:31:23.795 16:14:03 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.X58vfgNjQU 00:31:23.795 16:14:03 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.X58vfgNjQU 00:31:23.795 16:14:03 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.X58vfgNjQU 00:31:24.054 16:14:03 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:24.054 16:14:03 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:24.313 nvme0n1 00:31:24.313 16:14:03 -- keyring/file.sh@99 -- # get_refcnt key0 00:31:24.313 16:14:03 -- keyring/common.sh@12 -- # get_key key0 00:31:24.313 16:14:03 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:24.313 16:14:03 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:24.313 16:14:03 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:24.313 16:14:03 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:24.313 16:14:03 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:31:24.313 16:14:03 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:31:24.313 16:14:03 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:24.573 16:14:04 -- keyring/file.sh@101 -- # get_key key0 00:31:24.573 16:14:04 -- keyring/file.sh@101 -- # jq -r .removed 00:31:24.573 16:14:04 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:24.573 16:14:04 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:24.573 16:14:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:24.832 16:14:04 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:31:24.832 16:14:04 -- keyring/file.sh@102 -- # get_refcnt key0 00:31:24.832 16:14:04 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:24.832 16:14:04 -- keyring/common.sh@12 -- # get_key key0 00:31:24.832 16:14:04 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:24.832 16:14:04 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:24.832 16:14:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:24.832 16:14:04 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:31:24.832 16:14:04 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:24.832 16:14:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:25.092 16:14:04 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:31:25.092 16:14:04 -- keyring/file.sh@104 -- # jq length 00:31:25.092 
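The two failures just above pin down how keyring_file treats its backing file: keyring_file_add_key refuses a key file with group permissions (0660 is reported as "Invalid permissions for key file"), and deleting the file after registration makes a later attach fail with "No such file or directory", so the file has to stay in place with 0600 permissions for as long as the key is in use. A condensed sketch of the working path, with a placeholder body standing in for the NVMeTLSkey-1 interchange string the test derives from 00112233445566778899aabbccddeeff via its format_interchange_psk helper:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  path=$(mktemp)
  # Placeholder contents; the real test writes a full NVMeTLSkey-1 interchange string here
  echo "NVMeTLSkey-1:<interchange-psk>" > "$path"
  chmod 0600 "$path"    # 0660 would be rejected by keyring_file_check_path
  "$RPC" -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path"
  # keep "$path" around until key0 is removed - the file is read again when a controller attaches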
16:14:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:25.352 16:14:04 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:31:25.352 16:14:04 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.X58vfgNjQU 00:31:25.352 16:14:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.X58vfgNjQU 00:31:25.352 16:14:04 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Gvwb4qp0Mi 00:31:25.352 16:14:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Gvwb4qp0Mi 00:31:25.612 16:14:05 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:25.612 16:14:05 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:25.871 nvme0n1 00:31:25.871 16:14:05 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:31:25.871 16:14:05 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:31:26.131 16:14:05 -- keyring/file.sh@112 -- # config='{ 00:31:26.131 "subsystems": [ 00:31:26.131 { 00:31:26.131 "subsystem": "keyring", 00:31:26.131 "config": [ 00:31:26.131 { 00:31:26.131 "method": "keyring_file_add_key", 00:31:26.131 "params": { 00:31:26.131 "name": "key0", 00:31:26.131 "path": "/tmp/tmp.X58vfgNjQU" 00:31:26.131 } 00:31:26.131 }, 00:31:26.131 { 00:31:26.131 "method": "keyring_file_add_key", 00:31:26.131 "params": { 00:31:26.131 "name": "key1", 00:31:26.131 "path": "/tmp/tmp.Gvwb4qp0Mi" 00:31:26.131 } 00:31:26.131 } 00:31:26.131 ] 00:31:26.131 }, 00:31:26.131 { 00:31:26.131 "subsystem": "iobuf", 00:31:26.131 "config": [ 00:31:26.131 { 00:31:26.131 "method": "iobuf_set_options", 00:31:26.131 "params": { 00:31:26.131 "small_pool_count": 8192, 00:31:26.131 "large_pool_count": 1024, 00:31:26.131 "small_bufsize": 8192, 00:31:26.131 "large_bufsize": 135168 00:31:26.131 } 00:31:26.131 } 00:31:26.131 ] 00:31:26.131 }, 00:31:26.131 { 00:31:26.131 "subsystem": "sock", 00:31:26.131 "config": [ 00:31:26.131 { 00:31:26.131 "method": "sock_impl_set_options", 00:31:26.131 "params": { 00:31:26.131 "impl_name": "posix", 00:31:26.131 "recv_buf_size": 2097152, 00:31:26.131 "send_buf_size": 2097152, 00:31:26.131 "enable_recv_pipe": true, 00:31:26.131 "enable_quickack": false, 00:31:26.131 "enable_placement_id": 0, 00:31:26.131 "enable_zerocopy_send_server": true, 00:31:26.132 "enable_zerocopy_send_client": false, 00:31:26.132 "zerocopy_threshold": 0, 00:31:26.132 "tls_version": 0, 00:31:26.132 "enable_ktls": false 00:31:26.132 } 00:31:26.132 }, 00:31:26.132 { 00:31:26.132 "method": "sock_impl_set_options", 00:31:26.132 "params": { 00:31:26.132 "impl_name": "ssl", 00:31:26.132 "recv_buf_size": 4096, 00:31:26.132 "send_buf_size": 4096, 00:31:26.132 "enable_recv_pipe": true, 00:31:26.132 "enable_quickack": false, 00:31:26.132 "enable_placement_id": 0, 00:31:26.132 "enable_zerocopy_send_server": true, 00:31:26.132 "enable_zerocopy_send_client": false, 00:31:26.132 "zerocopy_threshold": 0, 00:31:26.132 
"tls_version": 0, 00:31:26.132 "enable_ktls": false 00:31:26.132 } 00:31:26.132 } 00:31:26.132 ] 00:31:26.132 }, 00:31:26.132 { 00:31:26.132 "subsystem": "vmd", 00:31:26.132 "config": [] 00:31:26.132 }, 00:31:26.132 { 00:31:26.132 "subsystem": "accel", 00:31:26.132 "config": [ 00:31:26.132 { 00:31:26.132 "method": "accel_set_options", 00:31:26.132 "params": { 00:31:26.132 "small_cache_size": 128, 00:31:26.132 "large_cache_size": 16, 00:31:26.132 "task_count": 2048, 00:31:26.132 "sequence_count": 2048, 00:31:26.132 "buf_count": 2048 00:31:26.132 } 00:31:26.132 } 00:31:26.132 ] 00:31:26.132 }, 00:31:26.132 { 00:31:26.132 "subsystem": "bdev", 00:31:26.132 "config": [ 00:31:26.132 { 00:31:26.132 "method": "bdev_set_options", 00:31:26.132 "params": { 00:31:26.132 "bdev_io_pool_size": 65535, 00:31:26.132 "bdev_io_cache_size": 256, 00:31:26.132 "bdev_auto_examine": true, 00:31:26.132 "iobuf_small_cache_size": 128, 00:31:26.132 "iobuf_large_cache_size": 16 00:31:26.132 } 00:31:26.132 }, 00:31:26.132 { 00:31:26.132 "method": "bdev_raid_set_options", 00:31:26.132 "params": { 00:31:26.132 "process_window_size_kb": 1024 00:31:26.132 } 00:31:26.132 }, 00:31:26.132 { 00:31:26.132 "method": "bdev_iscsi_set_options", 00:31:26.132 "params": { 00:31:26.132 "timeout_sec": 30 00:31:26.132 } 00:31:26.132 }, 00:31:26.132 { 00:31:26.132 "method": "bdev_nvme_set_options", 00:31:26.132 "params": { 00:31:26.132 "action_on_timeout": "none", 00:31:26.132 "timeout_us": 0, 00:31:26.132 "timeout_admin_us": 0, 00:31:26.132 "keep_alive_timeout_ms": 10000, 00:31:26.132 "arbitration_burst": 0, 00:31:26.132 "low_priority_weight": 0, 00:31:26.132 "medium_priority_weight": 0, 00:31:26.132 "high_priority_weight": 0, 00:31:26.132 "nvme_adminq_poll_period_us": 10000, 00:31:26.132 "nvme_ioq_poll_period_us": 0, 00:31:26.132 "io_queue_requests": 512, 00:31:26.132 "delay_cmd_submit": true, 00:31:26.132 "transport_retry_count": 4, 00:31:26.132 "bdev_retry_count": 3, 00:31:26.132 "transport_ack_timeout": 0, 00:31:26.132 "ctrlr_loss_timeout_sec": 0, 00:31:26.132 "reconnect_delay_sec": 0, 00:31:26.132 "fast_io_fail_timeout_sec": 0, 00:31:26.132 "disable_auto_failback": false, 00:31:26.132 "generate_uuids": false, 00:31:26.132 "transport_tos": 0, 00:31:26.132 "nvme_error_stat": false, 00:31:26.132 "rdma_srq_size": 0, 00:31:26.132 "io_path_stat": false, 00:31:26.132 "allow_accel_sequence": false, 00:31:26.132 "rdma_max_cq_size": 0, 00:31:26.132 "rdma_cm_event_timeout_ms": 0, 00:31:26.132 "dhchap_digests": [ 00:31:26.132 "sha256", 00:31:26.132 "sha384", 00:31:26.132 "sha512" 00:31:26.132 ], 00:31:26.132 "dhchap_dhgroups": [ 00:31:26.132 "null", 00:31:26.132 "ffdhe2048", 00:31:26.132 "ffdhe3072", 00:31:26.132 "ffdhe4096", 00:31:26.132 "ffdhe6144", 00:31:26.132 "ffdhe8192" 00:31:26.132 ] 00:31:26.132 } 00:31:26.132 }, 00:31:26.132 { 00:31:26.132 "method": "bdev_nvme_attach_controller", 00:31:26.132 "params": { 00:31:26.132 "name": "nvme0", 00:31:26.132 "trtype": "TCP", 00:31:26.132 "adrfam": "IPv4", 00:31:26.132 "traddr": "127.0.0.1", 00:31:26.132 "trsvcid": "4420", 00:31:26.132 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:26.132 "prchk_reftag": false, 00:31:26.132 "prchk_guard": false, 00:31:26.132 "ctrlr_loss_timeout_sec": 0, 00:31:26.132 "reconnect_delay_sec": 0, 00:31:26.132 "fast_io_fail_timeout_sec": 0, 00:31:26.132 "psk": "key0", 00:31:26.132 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:26.132 "hdgst": false, 00:31:26.132 "ddgst": false 00:31:26.132 } 00:31:26.132 }, 00:31:26.132 { 00:31:26.132 "method": "bdev_nvme_set_hotplug", 
00:31:26.132 "params": { 00:31:26.132 "period_us": 100000, 00:31:26.132 "enable": false 00:31:26.132 } 00:31:26.132 }, 00:31:26.132 { 00:31:26.132 "method": "bdev_wait_for_examine" 00:31:26.132 } 00:31:26.132 ] 00:31:26.132 }, 00:31:26.132 { 00:31:26.132 "subsystem": "nbd", 00:31:26.132 "config": [] 00:31:26.132 } 00:31:26.132 ] 00:31:26.132 }' 00:31:26.132 16:14:05 -- keyring/file.sh@114 -- # killprocess 2648924 00:31:26.132 16:14:05 -- common/autotest_common.sh@936 -- # '[' -z 2648924 ']' 00:31:26.132 16:14:05 -- common/autotest_common.sh@940 -- # kill -0 2648924 00:31:26.132 16:14:05 -- common/autotest_common.sh@941 -- # uname 00:31:26.132 16:14:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:26.132 16:14:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2648924 00:31:26.133 16:14:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:26.133 16:14:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:26.133 16:14:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2648924' 00:31:26.133 killing process with pid 2648924 00:31:26.133 16:14:05 -- common/autotest_common.sh@955 -- # kill 2648924 00:31:26.133 Received shutdown signal, test time was about 1.000000 seconds 00:31:26.133 00:31:26.133 Latency(us) 00:31:26.133 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:26.133 =================================================================================================================== 00:31:26.133 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:26.133 16:14:05 -- common/autotest_common.sh@960 -- # wait 2648924 00:31:27.073 16:14:06 -- keyring/file.sh@117 -- # bperfpid=2650530 00:31:27.073 16:14:06 -- keyring/file.sh@119 -- # waitforlisten 2650530 /var/tmp/bperf.sock 00:31:27.073 16:14:06 -- common/autotest_common.sh@817 -- # '[' -z 2650530 ']' 00:31:27.073 16:14:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:27.073 16:14:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:27.073 16:14:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:27.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:31:27.073 16:14:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:27.073 16:14:06 -- common/autotest_common.sh@10 -- # set +x 00:31:27.073 16:14:06 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:27.073 16:14:06 -- keyring/file.sh@115 -- # echo '{ 00:31:27.073 "subsystems": [ 00:31:27.073 { 00:31:27.073 "subsystem": "keyring", 00:31:27.073 "config": [ 00:31:27.073 { 00:31:27.073 "method": "keyring_file_add_key", 00:31:27.073 "params": { 00:31:27.073 "name": "key0", 00:31:27.073 "path": "/tmp/tmp.X58vfgNjQU" 00:31:27.073 } 00:31:27.073 }, 00:31:27.073 { 00:31:27.073 "method": "keyring_file_add_key", 00:31:27.073 "params": { 00:31:27.073 "name": "key1", 00:31:27.073 "path": "/tmp/tmp.Gvwb4qp0Mi" 00:31:27.073 } 00:31:27.073 } 00:31:27.073 ] 00:31:27.073 }, 00:31:27.073 { 00:31:27.073 "subsystem": "iobuf", 00:31:27.073 "config": [ 00:31:27.073 { 00:31:27.073 "method": "iobuf_set_options", 00:31:27.073 "params": { 00:31:27.073 "small_pool_count": 8192, 00:31:27.073 "large_pool_count": 1024, 00:31:27.073 "small_bufsize": 8192, 00:31:27.073 "large_bufsize": 135168 00:31:27.073 } 00:31:27.073 } 00:31:27.073 ] 00:31:27.073 }, 00:31:27.073 { 00:31:27.073 "subsystem": "sock", 00:31:27.073 "config": [ 00:31:27.073 { 00:31:27.073 "method": "sock_impl_set_options", 00:31:27.073 "params": { 00:31:27.073 "impl_name": "posix", 00:31:27.073 "recv_buf_size": 2097152, 00:31:27.073 "send_buf_size": 2097152, 00:31:27.073 "enable_recv_pipe": true, 00:31:27.073 "enable_quickack": false, 00:31:27.073 "enable_placement_id": 0, 00:31:27.073 "enable_zerocopy_send_server": true, 00:31:27.073 "enable_zerocopy_send_client": false, 00:31:27.073 "zerocopy_threshold": 0, 00:31:27.073 "tls_version": 0, 00:31:27.073 "enable_ktls": false 00:31:27.073 } 00:31:27.073 }, 00:31:27.073 { 00:31:27.073 "method": "sock_impl_set_options", 00:31:27.073 "params": { 00:31:27.073 "impl_name": "ssl", 00:31:27.073 "recv_buf_size": 4096, 00:31:27.073 "send_buf_size": 4096, 00:31:27.073 "enable_recv_pipe": true, 00:31:27.073 "enable_quickack": false, 00:31:27.073 "enable_placement_id": 0, 00:31:27.073 "enable_zerocopy_send_server": true, 00:31:27.073 "enable_zerocopy_send_client": false, 00:31:27.073 "zerocopy_threshold": 0, 00:31:27.073 "tls_version": 0, 00:31:27.073 "enable_ktls": false 00:31:27.073 } 00:31:27.073 } 00:31:27.073 ] 00:31:27.073 }, 00:31:27.073 { 00:31:27.073 "subsystem": "vmd", 00:31:27.073 "config": [] 00:31:27.073 }, 00:31:27.073 { 00:31:27.073 "subsystem": "accel", 00:31:27.073 "config": [ 00:31:27.073 { 00:31:27.073 "method": "accel_set_options", 00:31:27.073 "params": { 00:31:27.073 "small_cache_size": 128, 00:31:27.073 "large_cache_size": 16, 00:31:27.073 "task_count": 2048, 00:31:27.073 "sequence_count": 2048, 00:31:27.073 "buf_count": 2048 00:31:27.073 } 00:31:27.073 } 00:31:27.073 ] 00:31:27.073 }, 00:31:27.073 { 00:31:27.073 "subsystem": "bdev", 00:31:27.073 "config": [ 00:31:27.073 { 00:31:27.073 "method": "bdev_set_options", 00:31:27.073 "params": { 00:31:27.073 "bdev_io_pool_size": 65535, 00:31:27.073 "bdev_io_cache_size": 256, 00:31:27.073 "bdev_auto_examine": true, 00:31:27.073 "iobuf_small_cache_size": 128, 00:31:27.073 "iobuf_large_cache_size": 16 00:31:27.073 } 00:31:27.073 }, 00:31:27.073 { 00:31:27.073 "method": "bdev_raid_set_options", 00:31:27.073 "params": { 00:31:27.073 "process_window_size_kb": 1024 00:31:27.073 } 00:31:27.073 }, 00:31:27.073 { 
00:31:27.073 "method": "bdev_iscsi_set_options", 00:31:27.073 "params": { 00:31:27.073 "timeout_sec": 30 00:31:27.073 } 00:31:27.073 }, 00:31:27.073 { 00:31:27.073 "method": "bdev_nvme_set_options", 00:31:27.073 "params": { 00:31:27.073 "action_on_timeout": "none", 00:31:27.073 "timeout_us": 0, 00:31:27.073 "timeout_admin_us": 0, 00:31:27.073 "keep_alive_timeout_ms": 10000, 00:31:27.073 "arbitration_burst": 0, 00:31:27.073 "low_priority_weight": 0, 00:31:27.073 "medium_priority_weight": 0, 00:31:27.073 "high_priority_weight": 0, 00:31:27.073 "nvme_adminq_poll_period_us": 10000, 00:31:27.073 "nvme_ioq_poll_period_us": 0, 00:31:27.073 "io_queue_requests": 512, 00:31:27.073 "delay_cmd_submit": true, 00:31:27.074 "transport_retry_count": 4, 00:31:27.074 "bdev_retry_count": 3, 00:31:27.074 "transport_ack_timeout": 0, 00:31:27.074 "ctrlr_loss_timeout_sec": 0, 00:31:27.074 "reconnect_delay_sec": 0, 00:31:27.074 "fast_io_fail_timeout_sec": 0, 00:31:27.074 "disable_auto_failback": false, 00:31:27.074 "generate_uuids": false, 00:31:27.074 "transport_tos": 0, 00:31:27.074 "nvme_error_stat": false, 00:31:27.074 "rdma_srq_size": 0, 00:31:27.074 "io_path_stat": false, 00:31:27.074 "allow_accel_sequence": false, 00:31:27.074 "rdma_max_cq_size": 0, 00:31:27.074 "rdma_cm_event_timeout_ms": 0, 00:31:27.074 "dhchap_digests": [ 00:31:27.074 "sha256", 00:31:27.074 "sha384", 00:31:27.074 "sha512" 00:31:27.074 ], 00:31:27.074 "dhchap_dhgroups": [ 00:31:27.074 "null", 00:31:27.074 "ffdhe2048", 00:31:27.074 "ffdhe3072", 00:31:27.074 "ffdhe4096", 00:31:27.074 "ffdhe6144", 00:31:27.074 "ffdhe8192" 00:31:27.074 ] 00:31:27.074 } 00:31:27.074 }, 00:31:27.074 { 00:31:27.074 "method": "bdev_nvme_attach_controller", 00:31:27.074 "params": { 00:31:27.074 "name": "nvme0", 00:31:27.074 "trtype": "TCP", 00:31:27.074 "adrfam": "IPv4", 00:31:27.074 "traddr": "127.0.0.1", 00:31:27.074 "trsvcid": "4420", 00:31:27.074 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:27.074 "prchk_reftag": false, 00:31:27.074 "prchk_guard": false, 00:31:27.074 "ctrlr_loss_timeout_sec": 0, 00:31:27.074 "reconnect_delay_sec": 0, 00:31:27.074 "fast_io_fail_timeout_sec": 0, 00:31:27.074 "psk": "key0", 00:31:27.074 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:27.074 "hdgst": false, 00:31:27.074 "ddgst": false 00:31:27.074 } 00:31:27.074 }, 00:31:27.074 { 00:31:27.074 "method": "bdev_nvme_set_hotplug", 00:31:27.074 "params": { 00:31:27.074 "period_us": 100000, 00:31:27.074 "enable": false 00:31:27.074 } 00:31:27.074 }, 00:31:27.074 { 00:31:27.074 "method": "bdev_wait_for_examine" 00:31:27.074 } 00:31:27.074 ] 00:31:27.074 }, 00:31:27.074 { 00:31:27.074 "subsystem": "nbd", 00:31:27.074 "config": [] 00:31:27.074 } 00:31:27.074 ] 00:31:27.074 }' 00:31:27.333 [2024-04-26 16:14:06.790359] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:31:27.333 [2024-04-26 16:14:06.790454] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2650530 ] 00:31:27.333 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.333 [2024-04-26 16:14:06.896301] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.593 [2024-04-26 16:14:07.120749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.162 [2024-04-26 16:14:07.571493] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:28.162 16:14:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:28.162 16:14:07 -- common/autotest_common.sh@850 -- # return 0 00:31:28.162 16:14:07 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:31:28.162 16:14:07 -- keyring/file.sh@120 -- # jq length 00:31:28.162 16:14:07 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:28.421 16:14:07 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:31:28.421 16:14:07 -- keyring/file.sh@121 -- # get_refcnt key0 00:31:28.421 16:14:07 -- keyring/common.sh@12 -- # get_key key0 00:31:28.421 16:14:07 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:28.421 16:14:07 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:28.421 16:14:07 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:28.421 16:14:07 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:28.421 16:14:08 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:28.421 16:14:08 -- keyring/file.sh@122 -- # get_refcnt key1 00:31:28.421 16:14:08 -- keyring/common.sh@12 -- # get_key key1 00:31:28.421 16:14:08 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:28.421 16:14:08 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:28.421 16:14:08 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:28.421 16:14:08 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:28.681 16:14:08 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:31:28.681 16:14:08 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:31:28.681 16:14:08 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:31:28.681 16:14:08 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:28.941 16:14:08 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:31:28.941 16:14:08 -- keyring/file.sh@1 -- # cleanup 00:31:28.941 16:14:08 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.X58vfgNjQU /tmp/tmp.Gvwb4qp0Mi 00:31:28.941 16:14:08 -- keyring/file.sh@20 -- # killprocess 2650530 00:31:28.941 16:14:08 -- common/autotest_common.sh@936 -- # '[' -z 2650530 ']' 00:31:28.941 16:14:08 -- common/autotest_common.sh@940 -- # kill -0 2650530 00:31:28.941 16:14:08 -- common/autotest_common.sh@941 -- # uname 00:31:28.941 16:14:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:28.941 16:14:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2650530 00:31:28.941 16:14:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:28.941 16:14:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:28.941 16:14:08 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 2650530' 00:31:28.941 killing process with pid 2650530 00:31:28.941 16:14:08 -- common/autotest_common.sh@955 -- # kill 2650530 00:31:28.941 Received shutdown signal, test time was about 1.000000 seconds 00:31:28.941 00:31:28.941 Latency(us) 00:31:28.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:28.941 =================================================================================================================== 00:31:28.941 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:28.941 16:14:08 -- common/autotest_common.sh@960 -- # wait 2650530 00:31:29.880 16:14:09 -- keyring/file.sh@21 -- # killprocess 2648684 00:31:29.880 16:14:09 -- common/autotest_common.sh@936 -- # '[' -z 2648684 ']' 00:31:29.880 16:14:09 -- common/autotest_common.sh@940 -- # kill -0 2648684 00:31:29.880 16:14:09 -- common/autotest_common.sh@941 -- # uname 00:31:29.880 16:14:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:29.880 16:14:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2648684 00:31:29.880 16:14:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:29.880 16:14:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:29.880 16:14:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2648684' 00:31:29.880 killing process with pid 2648684 00:31:29.880 16:14:09 -- common/autotest_common.sh@955 -- # kill 2648684 00:31:29.880 [2024-04-26 16:14:09.501471] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:29.880 16:14:09 -- common/autotest_common.sh@960 -- # wait 2648684 00:31:32.418 00:31:32.418 real 0m16.228s 00:31:32.418 user 0m33.411s 00:31:32.418 sys 0m2.976s 00:31:32.418 16:14:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:32.418 16:14:11 -- common/autotest_common.sh@10 -- # set +x 00:31:32.418 ************************************ 00:31:32.418 END TEST keyring_file 00:31:32.418 ************************************ 00:31:32.418 16:14:11 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:31:32.418 16:14:11 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:31:32.418 16:14:11 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:31:32.418 16:14:11 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:31:32.418 16:14:11 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:31:32.418 16:14:11 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:31:32.418 16:14:11 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:31:32.418 16:14:11 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:31:32.418 16:14:11 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:31:32.418 16:14:11 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:31:32.418 16:14:11 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:31:32.418 16:14:11 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:31:32.418 16:14:11 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:31:32.418 16:14:11 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:31:32.418 16:14:11 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:31:32.418 16:14:11 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:31:32.418 16:14:11 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:31:32.418 16:14:11 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:31:32.418 16:14:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:32.418 16:14:11 -- common/autotest_common.sh@10 -- # set +x 00:31:32.418 16:14:11 -- spdk/autotest.sh@381 -- # 
autotest_cleanup 00:31:32.418 16:14:11 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:31:32.418 16:14:11 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:31:32.418 16:14:11 -- common/autotest_common.sh@10 -- # set +x 00:31:36.606 INFO: APP EXITING 00:31:36.606 INFO: killing all VMs 00:31:36.606 INFO: killing vhost app 00:31:36.606 INFO: EXIT DONE 00:31:39.145 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:31:39.145 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:31:39.145 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:31:39.145 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:31:39.145 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:31:39.145 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:31:39.145 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:31:39.145 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:31:39.145 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:31:39.145 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:31:39.145 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:31:39.145 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:31:39.145 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:31:39.405 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:31:39.405 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:31:39.405 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:31:39.405 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:31:41.944 Cleaning 00:31:41.944 Removing: /var/run/dpdk/spdk0/config 00:31:41.944 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:41.944 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:41.944 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:41.944 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:41.944 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:31:41.944 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:31:41.944 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:31:41.944 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:31:41.944 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:41.944 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:41.944 Removing: /var/run/dpdk/spdk1/config 00:31:41.944 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:41.944 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:41.944 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:41.944 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:41.944 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:31:41.944 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:31:41.944 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:31:41.944 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:31:41.944 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:41.944 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:41.944 Removing: /var/run/dpdk/spdk1/mp_socket 00:31:41.944 Removing: /var/run/dpdk/spdk2/config 00:31:41.944 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:41.944 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:41.944 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:41.944 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:41.944 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:31:41.944 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 
00:31:41.944 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:31:41.944 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:31:41.944 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:41.944 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:41.944 Removing: /var/run/dpdk/spdk3/config 00:31:41.944 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:41.944 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:41.944 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:41.944 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:41.944 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:31:41.944 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:31:41.944 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:31:41.944 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:31:41.944 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:41.944 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:41.944 Removing: /var/run/dpdk/spdk4/config 00:31:41.944 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:41.944 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:41.944 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:41.944 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:41.944 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:31:41.944 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:31:41.944 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:31:41.944 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:31:41.944 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:41.944 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:41.944 Removing: /dev/shm/bdev_svc_trace.1 00:31:41.944 Removing: /dev/shm/nvmf_trace.0 00:31:41.944 Removing: /dev/shm/spdk_tgt_trace.pid2255801 00:31:41.944 Removing: /var/run/dpdk/spdk0 00:31:41.944 Removing: /var/run/dpdk/spdk1 00:31:41.944 Removing: /var/run/dpdk/spdk2 00:31:41.944 Removing: /var/run/dpdk/spdk3 00:31:41.944 Removing: /var/run/dpdk/spdk4 00:31:41.944 Removing: /var/run/dpdk/spdk_pid2251789 00:31:41.944 Removing: /var/run/dpdk/spdk_pid2253333 00:31:41.944 Removing: /var/run/dpdk/spdk_pid2255801 00:31:41.944 Removing: /var/run/dpdk/spdk_pid2256948 00:31:41.944 Removing: /var/run/dpdk/spdk_pid2258352 00:31:41.944 Removing: /var/run/dpdk/spdk_pid2258938 00:31:41.944 Removing: /var/run/dpdk/spdk_pid2260280 00:31:41.944 Removing: /var/run/dpdk/spdk_pid2260516 00:31:41.944 Removing: /var/run/dpdk/spdk_pid2261335 00:31:41.944 Removing: /var/run/dpdk/spdk_pid2263064 00:31:42.203 Removing: /var/run/dpdk/spdk_pid2264575 00:31:42.203 Removing: /var/run/dpdk/spdk_pid2265520 00:31:42.203 Removing: /var/run/dpdk/spdk_pid2266307 00:31:42.203 Removing: /var/run/dpdk/spdk_pid2267076 00:31:42.203 Removing: /var/run/dpdk/spdk_pid2267839 00:31:42.203 Removing: /var/run/dpdk/spdk_pid2268105 00:31:42.203 Removing: /var/run/dpdk/spdk_pid2268591 00:31:42.203 Removing: /var/run/dpdk/spdk_pid2268993 00:31:42.203 Removing: /var/run/dpdk/spdk_pid2270105 00:31:42.203 Removing: /var/run/dpdk/spdk_pid2273838 00:31:42.203 Removing: /var/run/dpdk/spdk_pid2274574 00:31:42.203 Removing: /var/run/dpdk/spdk_pid2275307 00:31:42.203 Removing: /var/run/dpdk/spdk_pid2275534 00:31:42.203 Removing: /var/run/dpdk/spdk_pid2277413 00:31:42.203 Removing: /var/run/dpdk/spdk_pid2277646 00:31:42.203 Removing: /var/run/dpdk/spdk_pid2279485 00:31:42.203 Removing: /var/run/dpdk/spdk_pid2279695 00:31:42.203 Removing: /var/run/dpdk/spdk_pid2280263 00:31:42.203 Removing: 
/var/run/dpdk/spdk_pid2280497 00:31:42.203 Removing: /var/run/dpdk/spdk_pid2281061 00:31:42.203 Removing: /var/run/dpdk/spdk_pid2281288 00:31:42.203 Removing: /var/run/dpdk/spdk_pid2282897 00:31:42.203 Removing: /var/run/dpdk/spdk_pid2283215 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2283563 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2284297 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2284738 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2285054 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2285543 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2286032 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2286514 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2287000 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2287483 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2287969 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2288462 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2288940 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2289430 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2289916 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2290399 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2290887 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2291376 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2291855 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2292344 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2292830 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2293315 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2293806 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2294294 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2294763 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2295094 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2296037 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2300221 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2346715 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2351245 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2360898 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2366428 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2370896 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2371464 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2383898 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2383901 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2384819 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2385696 00:31:42.204 Removing: /var/run/dpdk/spdk_pid2386529 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2387125 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2387167 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2387552 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2387593 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2387603 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2388515 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2389431 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2390352 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2390938 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2391042 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2391280 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2393093 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2394545 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2403742 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2404216 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2408948 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2415056 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2417876 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2428923 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2438119 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2440020 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2441257 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2459376 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2463619 00:31:42.464 Removing: 
/var/run/dpdk/spdk_pid2468355 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2469956 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2472025 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2472391 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2472730 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2473085 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2473978 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2475997 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2477467 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2478409 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2480960 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2481919 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2483156 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2488117 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2498433 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2502674 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2509009 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2511248 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2513712 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2518657 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2522976 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2531028 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2531031 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2536495 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2536732 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2536960 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2537421 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2537433 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2541908 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2542484 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2547220 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2550023 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2555875 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2561454 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2568907 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2568983 00:31:42.464 Removing: /var/run/dpdk/spdk_pid2587693 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2588613 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2589322 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2590246 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2591463 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2592257 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2593081 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2593788 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2598478 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2598967 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2605292 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2605573 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2608050 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2616013 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2616136 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2621286 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2623555 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2626137 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2627413 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2629627 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2631102 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2640090 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2640668 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2641230 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2643931 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2644398 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2644864 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2648684 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2648924 00:31:42.724 Removing: /var/run/dpdk/spdk_pid2650530 00:31:42.724 Clean 00:31:42.983 16:14:22 -- common/autotest_common.sh@1437 -- # 
return 0 00:31:42.983 16:14:22 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:31:42.983 16:14:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:42.983 16:14:22 -- common/autotest_common.sh@10 -- # set +x 00:31:42.983 16:14:22 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:31:42.983 16:14:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:42.983 16:14:22 -- common/autotest_common.sh@10 -- # set +x 00:31:42.983 16:14:22 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:31:42.983 16:14:22 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:31:42.983 16:14:22 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:31:42.983 16:14:22 -- spdk/autotest.sh@389 -- # hash lcov 00:31:42.983 16:14:22 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:31:42.983 16:14:22 -- spdk/autotest.sh@391 -- # hostname 00:31:42.983 16:14:22 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:31:43.242 geninfo: WARNING: invalid characters removed from testname! 00:32:05.296 16:14:42 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:05.296 16:14:44 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:06.673 16:14:46 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:08.577 16:14:48 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:10.483 16:14:49 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:12.409 16:14:51 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:13.788 16:14:53 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:13.788 16:14:53 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:13.788 16:14:53 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:32:13.788 16:14:53 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:13.789 16:14:53 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:13.789 16:14:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.789 16:14:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.789 16:14:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.789 16:14:53 -- paths/export.sh@5 -- $ export PATH 00:32:13.789 16:14:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.789 16:14:53 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:32:13.789 16:14:53 -- common/autobuild_common.sh@435 -- $ date +%s 00:32:13.789 16:14:53 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714140893.XXXXXX 00:32:14.048 16:14:53 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714140893.lOdeGI 00:32:14.048 16:14:53 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:32:14.048 16:14:53 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:32:14.048 16:14:53 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:32:14.048 16:14:53 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude 
00:32:14.048 16:14:53 -- common/autobuild_common.sh@451 -- $ get_config_params
00:32:14.048 16:14:53 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:32:14.048 16:14:53 -- common/autotest_common.sh@10 -- $ set +x
00:32:14.048 16:14:53 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user'
00:32:14.048 16:14:53 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:32:14.048 16:14:53 -- pm/common@17 -- $ local monitor
00:32:14.048 16:14:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:14.048 16:14:53 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2660735
00:32:14.048 16:14:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:14.048 16:14:53 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2660737
00:32:14.048 16:14:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:14.048 16:14:53 -- pm/common@21 -- $ date +%s
00:32:14.048 16:14:53 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2660739
00:32:14.048 16:14:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:14.048 16:14:53 -- pm/common@21 -- $ date +%s
00:32:14.048 16:14:53 -- pm/common@21 -- $ date +%s
00:32:14.048 16:14:53 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2660742
00:32:14.048 16:14:53 -- pm/common@26 -- $ sleep 1
00:32:14.048 16:14:53 -- pm/common@21 -- $ date +%s
00:32:14.049 16:14:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714140893
00:32:14.049 16:14:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714140893
00:32:14.049 16:14:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714140893
00:32:14.049 16:14:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714140893
00:32:14.049 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714140893_collect-cpu-temp.pm.log
00:32:14.049 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714140893_collect-vmstat.pm.log
00:32:14.049 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714140893_collect-bmc-pm.bmc.pm.log
00:32:14.049 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714140893_collect-cpu-load.pm.log
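The block above fans out four resource monitors (CPU load, vmstat, CPU temperature, BMC power) in the background and records their PIDs so they can be torn down later. A minimal sketch of that pattern, assuming the same script paths and -d/-l/-p arguments; the backgrounding and $! bookkeeping are an assumption about how pm/common does it, not something the log shows directly:

# Sketch of the monitor fan-out above; script names and arguments come from the log,
# the '&' / $! bookkeeping is assumed.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
power_dir=$spdk/../output/power
prefix=monitor.autopackage.sh.$(date +%s)
declare -A MONITOR_RESOURCES_PIDS

for monitor in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
    sudo -E "$spdk/scripts/perf/pm/$monitor" -d "$power_dir" -l -p "$prefix" &
    MONITOR_RESOURCES_PIDS["$monitor"]=$!
done
sleep 1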
00:32:14.987 16:14:54 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:32:14.987 16:14:54 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:32:14.987 16:14:54 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:14.987 16:14:54 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:32:14.987 16:14:54 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:32:14.987 16:14:54 -- spdk/autopackage.sh@19 -- $ timing_finish
00:32:14.987 16:14:54 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:14.987 16:14:54 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:32:14.987 16:14:54 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:32:14.987 16:14:54 -- spdk/autopackage.sh@20 -- $ exit 0
00:32:14.987 16:14:54 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:32:14.987 16:14:54 -- pm/common@30 -- $ signal_monitor_resources TERM
00:32:14.987 16:14:54 -- pm/common@41 -- $ local monitor pid pids signal=TERM
00:32:14.987 16:14:54 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:14.987 16:14:54 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:32:14.987 16:14:54 -- pm/common@45 -- $ pid=2660754
00:32:14.987 16:14:54 -- pm/common@52 -- $ sudo kill -TERM 2660754
00:32:14.987 16:14:54 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:14.987 16:14:54 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:32:14.987 16:14:54 -- pm/common@45 -- $ pid=2660755
00:32:14.987 16:14:54 -- pm/common@52 -- $ sudo kill -TERM 2660755
00:32:14.987 16:14:54 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:14.987 16:14:54 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:32:14.987 16:14:54 -- pm/common@45 -- $ pid=2660749
00:32:14.987 16:14:54 -- pm/common@52 -- $ sudo kill -TERM 2660749
00:32:14.987 16:14:54 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:14.987 16:14:54 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:32:14.987 16:14:54 -- pm/common@45 -- $ pid=2660757
00:32:14.987 16:14:54 -- pm/common@52 -- $ sudo kill -TERM 2660757
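stop_monitor_resources above walks the pid files the monitors left in the power output directory and TERMs each recorded process. A hedged sketch of that teardown, assuming the pid is read back from the file (the log only shows the already-resolved pid= values):

# Sketch of the pid-file based teardown above; pid file names come from the log,
# reading the pid back from the file is assumed.
power_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
for pidfile in collect-cpu-load.pid collect-vmstat.pid collect-cpu-temp.pid collect-bmc-pm.pid; do
    if [[ -e "$power_dir/$pidfile" ]]; then
        sudo kill -TERM "$(cat "$power_dir/$pidfile")"
    fi
done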
00:32:14.987 + [[ -n 2147527 ]]
00:32:14.987 + sudo kill 2147527
00:32:15.256 [Pipeline] }
00:32:15.275 [Pipeline] // stage
00:32:15.280 [Pipeline] }
00:32:15.298 [Pipeline] // timeout
00:32:15.303 [Pipeline] }
00:32:15.320 [Pipeline] // catchError
00:32:15.326 [Pipeline] }
00:32:15.343 [Pipeline] // wrap
00:32:15.350 [Pipeline] }
00:32:15.369 [Pipeline] // catchError
00:32:15.378 [Pipeline] stage
00:32:15.380 [Pipeline] { (Epilogue)
00:32:15.395 [Pipeline] catchError
00:32:15.397 [Pipeline] {
00:32:15.412 [Pipeline] echo
00:32:15.414 Cleanup processes
00:32:15.420 [Pipeline] sh
00:32:15.702 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:15.702 2660873 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:32:15.702 2661158 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:15.716 [Pipeline] sh
00:32:16.001 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:32:16.001 ++ grep -v 'sudo pgrep'
00:32:16.001 ++ awk '{print $1}'
00:32:16.001 + sudo kill -9 2660873
00:32:16.012 [Pipeline] sh
00:32:16.292 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:32:26.276 [Pipeline] sh
00:32:26.557 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:32:26.557 Artifacts sizes are good
00:32:26.570 [Pipeline] archiveArtifacts
00:32:26.577 Archiving artifacts
00:32:26.735 [Pipeline] sh
00:32:27.033 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:32:27.049 [Pipeline] cleanWs
00:32:27.060 [WS-CLEANUP] Deleting project workspace...
00:32:27.060 [WS-CLEANUP] Deferred wipeout is used...
00:32:27.066 [WS-CLEANUP] done
00:32:27.067 [Pipeline] }
00:32:27.086 [Pipeline] // catchError
00:32:27.098 [Pipeline] sh
00:32:27.378 + logger -p user.info -t JENKINS-CI
00:32:27.387 [Pipeline] }
00:32:27.405 [Pipeline] // stage
00:32:27.411 [Pipeline] }
00:32:27.423 [Pipeline] // node
00:32:27.427 [Pipeline] End of Pipeline
00:32:27.448 Finished: SUCCESS